Sample records for valid code sets

  1. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  2. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  3. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  4. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  5. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  6. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

WEC-Sim is an open-source code for modeling wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields and motions in 6 DOF, as well as multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open-source AQWA/WAMIT/NEMOH coefficient parser).
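
    As a pointer to the formulation named above, a standard form of the Cummins time-domain equation of motion is sketched below in LaTeX; the symbol conventions are common usage, not necessarily WEC-Sim's own documentation:

      % Cummins equation in 6 DOF: inertia with infinite-frequency added mass,
      % radiation-damping convolution, hydrostatic restoring, external forcing
      (M + A_\infty)\,\ddot{x}(t)
          + \int_0^{t} K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau
          + C\,x(t)
          = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t)

    Here x(t) is the 6-DOF displacement vector, M the mass matrix, A_infinity the infinite-frequency added mass, K the radiation impulse-response kernel, C the hydrostatic stiffness, F_exc the wave excitation force, and F_PTO the power take-off force.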

  7. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were retrieved and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
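
    As a reading aid for the metrics reported above, a minimal Python sketch of how sensitivity and specificity are computed from an algorithm's classifications against a chart-review reference standard (all names are illustrative, not from the study):

      # Sensitivity and specificity of a case-finding algorithm versus a
      # reference standard; inputs are parallel lists of booleans, one
      # entry per candidate case.
      def sensitivity_specificity(algorithm_positive, reference_case):
          pairs = list(zip(algorithm_positive, reference_case))
          tp = sum(a and c for a, c in pairs)          # true positives
          fn = sum(not a and c for a, c in pairs)      # false negatives
          tn = sum(not a and not c for a, c in pairs)  # true negatives
          fp = sum(a and not c for a, c in pairs)      # false positives
          return tp / (tp + fn), tn / (tn + fp)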

  8. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. Many of the remaining problems were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  9. Developing Electronic Health Record Algorithms That Accurately Identify Patients With Systemic Lupus Erythematosus.

    PubMed

    Barnado, April; Casey, Carolyn; Carroll, Robert J; Wheless, Lee; Denny, Joshua C; Crofford, Leslie J

    2017-05-01

    To study systemic lupus erythematosus (SLE) in the electronic health record (EHR), we must accurately identify patients with SLE. Our objective was to develop and validate novel EHR algorithms that use International Classification of Diseases, Ninth Revision (ICD-9), Clinical Modification codes, laboratory testing, and medications to identify SLE patients. We used Vanderbilt's Synthetic Derivative, a de-identified version of the EHR, with 2.5 million subjects. We selected all individuals with at least 1 SLE ICD-9 code (710.0), yielding 5,959 individuals. To create a training set, 200 subjects were randomly selected for chart review. A subject was defined as a case if diagnosed with SLE by a rheumatologist, nephrologist, or dermatologist. Positive predictive values (PPVs) and sensitivity were calculated for combinations of code counts of the SLE ICD-9 code, a positive antinuclear antibody (ANA), ever use of medications, and a keyword of "lupus" in the problem list. The algorithms with the highest PPV were each internally validated using a random set of 100 individuals from the remaining 5,759 subjects. The algorithm with the highest PPV at 95% in the training set and 91% in the validation set was 3 or more counts of the SLE ICD-9 code, ANA positive (≥1:40), and ever use of both disease-modifying antirheumatic drugs and steroids, while excluding individuals with systemic sclerosis and dermatomyositis ICD-9 codes. We developed and validated the first EHR algorithm that incorporates laboratory values and medications with the SLE ICD-9 code to identify patients with SLE accurately. © 2016, American College of Rheumatology.
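
    The highest-PPV rule above lends itself to a simple predicate; a sketch under assumed field names (the names on `pt` are hypothetical, not actual Synthetic Derivative fields):

      # Rule from the abstract: >= 3 counts of ICD-9 710.0, ANA positive at
      # >= 1:40, ever-use of both DMARDs and steroids, and no systemic
      # sclerosis or dermatomyositis ICD-9 codes.
      def meets_sle_algorithm(pt):
          return (pt["sle_code_count"] >= 3
                  and pt["ana_titer"] >= 40      # 1:40 titer stored as 40
                  and pt["ever_dmard"]
                  and pt["ever_steroid"]
                  and not pt["has_ssc_codes"]    # exclude systemic sclerosis
                  and not pt["has_dm_codes"])    # exclude dermatomyositis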

  10. Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Bridges, James

    2017-01-01

The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), are attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T∞ = 1.764) on one of the meshes are also described. Finally, the proposed configuration for farfield noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.

  11. A test of the validity of the motivational interviewing treatment integrity code.

    PubMed

    Forsberg, Lars; Berman, Anne H; Kallmén, Håkan; Hermansson, Ulric; Helgason, Asgeir R

    2008-01-01

    To evaluate the Swedish version of the Motivational Interviewing Treatment Code (MITI), MITI coding was applied to tape-recorded counseling sessions. Construct validity was assessed using factor analysis on 120 MITI-coded sessions. Discriminant validity was assessed by comparing MITI coding of motivational interviewing (MI) sessions with information- and advice-giving sessions as well as by comparing MI-trained practitioners with untrained practitioners. A principal-axis factoring analysis yielded some evidence for MITI construct validity. MITI differentiated between practitioners with different levels of MI training as well as between MI practitioners and advice-giving counselors, thus supporting discriminant validity. MITI may be used as a training tool together with supervision to confirm and enhance MI practice in clinical settings. MITI can also serve as a tool for evaluating MI integrity in clinical research.

  12. Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments.

    PubMed

    Jones, B E; South, B R; Shao, Y; Lu, C C; Leng, J; Sauer, B C; Gundlapalli, A V; Samore, M H; Zeng, Q

    2018-01-01

Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes. This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia. Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in training and validation sets. Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80). System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty. Schattauer GmbH Stuttgart.
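
    As an illustration of the document-level classification step (not the authors' actual system), a minimal support-vector-machine text classifier in Python with scikit-learn; the two training notes and labels are toy examples:

      # Train a linear SVM over TF-IDF features of ED note text, then
      # classify a new note for a clinical assertion of pneumonia.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      notes = [
          "cxr with right lower lobe infiltrate; treating for pneumonia",
          "chest xray clear; pneumonia considered unlikely",
      ]
      labels = [1, 0]  # 1 = asserted pneumonia (from manual annotation)

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      clf.fit(notes, labels)
      print(clf.predict(["new rll infiltrate, start antibiotics for pneumonia"]))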

  13. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  14. Brief surgical procedure code lists for outcomes measurement and quality improvement in resource-limited settings.

    PubMed

    Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul

    2017-11-01

    The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings. Copyright © 2017 Elsevier Inc. All rights reserved.
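
    The list-construction step described above reduces to a frequency cut; a minimal sketch, assuming the input is one flat (hypothetical) list of all assigned ICD-9-CM procedure codes:

      # Keep the most frequent procedure codes until the requested share of
      # all code assignments is covered (the study's 90%/95%/98% lists).
      from collections import Counter

      def brief_code_list(assigned_codes, coverage):
          counts = Counter(assigned_codes).most_common()
          total = sum(n for _, n in counts)
          kept, covered = [], 0
          for code, n in counts:
              kept.append(code)
              covered += n
              if covered / total >= coverage:
                  break
          return kept

      # e.g. brief_code_list(all_2014_codes, 0.90) -- in the study's data,
      # the 90% list contained 111 codes.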

  15. Dynamic Forces in Spur Gears - Measurement, Prediction, and Code Validation

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Townsend, Dennis P.; Rebbechi, Brian; Lin, Hsiang Hsi

    1996-01-01

    Measured and computed values for dynamic loads in spur gears were compared to validate a new version of the NASA gear dynamics code DANST-PC. Strain gage data from six gear sets with different tooth profiles were processed to determine the dynamic forces acting between the gear teeth. Results demonstrate that the analysis code successfully simulates the dynamic behavior of the gears. Differences between analysis and experiment were less than 10 percent under most conditions.

  16. ClinicalCodes: an online clinical codes repository to improve the validity and reproducibility of research using electronic medical records.

    PubMed

    Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David

    2014-01-01

    Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.

  17. ClinicalCodes: An Online Clinical Codes Repository to Improve the Validity and Reproducibility of Research Using Electronic Medical Records

    PubMed Central

    Springate, David A.; Kontopantelis, Evangelos; Ashcroft, Darren M.; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David

    2014-01-01

    Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects. PMID:24941260

  18. Prediction of plant lncRNA by ensemble machine learning classifiers.

    PubMed

    Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian

    2018-05-02

In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances on discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class with applications largely focused on animal systems. Our approach uses a training set comprised only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply as computationally predicted hypothetical proteins. Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
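
    A minimal sketch of the stacking arrangement described above, using scikit-learn names; the actual feature engineering and tuning are the paper's and are not shown here:

      # Stack gradient-boosting and random-forest base learners under a
      # logistic-regression meta-learner; X would hold transcript sequence
      # features, y would mark empirically validated lncRNAs.
      from sklearn.ensemble import (GradientBoostingClassifier,
                                    RandomForestClassifier,
                                    StackingClassifier)
      from sklearn.linear_model import LogisticRegression

      stack = StackingClassifier(
          estimators=[("gb", GradientBoostingClassifier()),
                      ("rf", RandomForestClassifier())],
          final_estimator=LogisticRegression(),
      )
      # stack.fit(X, y); stack.predict_proba(candidates)[:, 1] then ranks
      # candidate transcripts for functional validation.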

  19. Validity of the International Classification of Diseases 10th revision code for hospitalisation with hyponatraemia in elderly patients

    PubMed Central

    Gandhi, Sonja; Shariff, Salimah Z; Fleet, Jamie L; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objective To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission. Design Population-based retrospective validation study. Setting Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64 581) and at hospital admission (n=64 499). Main outcome measures Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard). Median serum sodium values comparing patients who were code positive and code negative for hyponatraemia. Results The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l. Conclusions The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission. However, these codes underestimate the true incidence of hyponatraemia due to low sensitivity. PMID:23274673

  20. Billing code algorithms to identify cases of peripheral artery disease from administrative data

    PubMed Central

    Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J

    2013-01-01

    Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
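
    The "simpler algorithm" above amounts to a one-line predicate; a sketch:

      # Flag PAD if at least one assigned ICD-9 code lies in 440.20-440.29.
      def simple_pad_algorithm(icd9_codes):
          return any(code.startswith("440.2") for code in icd9_codes)

      simple_pad_algorithm(["440.21", "250.00"])  # -> True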

  1. The Initial Atmospheric Transport (IAT) Code: Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, Charles W.; Bartel, Timothy James

The Initial Atmospheric Transport (IAT) computer code was developed at Sandia National Laboratories as part of their nuclear launch accident consequences analysis suite of computer codes. The purpose of IAT is to predict the initial puff/plume rise resulting from either a solid rocket propellant or liquid rocket fuel fire. The code generates initial conditions for subsequent atmospheric transport calculations. The IAT code has been compared to two data sets which are appropriate to the design space of space launch accident analyses. The primary model uncertainties are the entrainment coefficients for the extended Taylor model. The Titan 34D accident (1986) was used to calibrate these entrainment settings for a prototypic liquid propellant accident, while the recent Johns Hopkins University Applied Physics Laboratory (JHU/APL, or simply APL) large propellant block tests (2012) were used to calibrate the entrainment settings for prototypic solid propellant accidents. North American Meteorology (NAM)-formatted weather data profiles are used by IAT to determine the local buoyancy force balance. The IAT comparisons for the APL solid propellant tests illustrate the sensitivity of the plume elevation to the weather profiles; that is, the weather profile is a dominant factor in determining the plume elevation. The IAT code performed remarkably well and is considered validated for neutral weather conditions.

  2. Validity of the International Classification of Diseases 10th revision code for hyperkalaemia in elderly patients at presentation to an emergency department and at hospital admission

    PubMed Central

    Fleet, Jamie L; Shariff, Salimah Z; Gandhi, Sonja; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objectives Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission. Design Population-based validation study. Setting 12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497). Primary outcome Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively). Results The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively. Conclusions Patients with hospital encounters who were ICD-10 E87.5 hyperkalaemia code positive and negative had distinct higher and lower serum potassium values, respectively. However, due to very low sensitivity, the incidence of hyperkalaemia is underestimated. PMID:23274674

  3. Validation Data and Model Development for Fuel Assembly Response to Seismic Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardet, Philippe; Ricciardi, Guillaume

    2016-01-31

Vibrations are inherently present in nuclear reactors, especially in cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and wear and tear in the reactor and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics, multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.

  4. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  5. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (1) mode blockage, (2) liner insertion loss, (3) short ducts, and (4) mode reflection.

  6. Challenges in using medicaid claims to ascertain child maltreatment.

    PubMed

    Raghavan, Ramesh; Brown, Derek S; Allaire, Benjamin T; Garfield, Lauren D; Ross, Raven E; Hedeker, Donald

    2015-05-01

Medicaid data contain International Classification of Diseases, Clinical Modification (ICD-9-CM) codes indicating maltreatment, yet there is little information on how valid these codes are for the purposes of identifying maltreatment from health, as opposed to child welfare, data. This study assessed the validity of Medicaid codes in identifying maltreatment. Participants (n = 2,136) in the first National Survey of Child and Adolescent Well-Being were linked to their Medicaid claims obtained from 36 states. Caseworker determinations of maltreatment were compared with eight sets of ICD-9-CM codes. Of the 1,921 children identified by caseworkers as being maltreated, 15.2% had any relevant ICD-9-CM code in any of their Medicaid files across 4 years of observation. Maltreated boys and those of African American race had lower odds of displaying a maltreatment code. Using only Medicaid claims to identify maltreated children creates validity problems. Medicaid data linkage with other types of administrative data is required to better identify maltreated children. © The Author(s) 2014.

  7. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret

    2016-08-01

The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  8. Asymmetry of Peak Thicknesses between the Superior and Inferior Retinal Nerve Fiber Layers for Early Glaucoma Detection: A Simple Screening Method.

    PubMed

    Bae, Hyoung Won; Lee, Sang Yeop; Kim, Sangah; Park, Chan Keum; Lee, Kwanghyun; Kim, Chan Yun; Seong, Gong Je

    2018-01-01

To assess whether the asymmetry in the peripapillary retinal nerve fiber layer (pRNFL) thickness between the superior and inferior hemispheres on optical coherence tomography (OCT) is useful for early detection of glaucoma. The patient population consisted of a Training set (60 subjects with early glaucoma and 59 normal subjects) and a Validation set (30 subjects with early glaucoma and 30 normal subjects). Two kinds of ratios were employed to measure the asymmetry between the superior and inferior pRNFL thickness using OCT. One was the ratio of the superior to inferior peak thicknesses (peak pRNFL thickness ratio; PTR), and the other was the ratio of the superior to inferior average thickness (average pRNFL thickness ratio; ATR). The diagnostic abilities of the PTR and ATR were compared to the color code classification in OCT. Using the optimal cut-off values of the PTR and ATR obtained from the Training set, the two ratios were independently validated for diagnostic capability. For the Training set, the sensitivities/specificities of the PTR, ATR, quadrant color code classification, and clock-hour color code classification were 81.7%/93.2%, 71.7%/74.6%, 75.0%/93.2%, and 75.0%/79.7%, respectively. The PTR showed a better diagnostic performance for early glaucoma detection than the ATR and the clock-hour color code classification in terms of areas under the receiver operating characteristic curves (AUCs) (0.898, 0.765, and 0.773, respectively). For the Validation set, the PTR also showed the best sensitivity and AUC. The PTR is a simple method with considerable diagnostic ability for early glaucoma detection. It can, therefore, be widely used as a new screening method for early glaucoma. © Copyright: Yonsei University College of Medicine 2018
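
    A sketch of the two ratios as computations; the cutoff below is a placeholder, since the study derives its optimal cut-off values from the Training set rather than quoting them here:

      # PTR: superior / inferior peak pRNFL thickness; ATR: superior /
      # inferior average thickness. Marked asymmetry in either direction
      # is the screening signal.
      def ptr(superior_peak_um, inferior_peak_um):
          return superior_peak_um / inferior_peak_um

      def atr(superior_avg_um, inferior_avg_um):
          return superior_avg_um / inferior_avg_um

      def asymmetric(ratio, cutoff=0.8):  # 0.8 is a placeholder cutoff
          return min(ratio, 1.0 / ratio) < cutoff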

  9. TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, G.J.; Pruess

    1992-11-01

The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.

  10. NASA Radiation Protection Research for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Tripathi, Ram K.; Heinbockel, John H.; Tweed, John; Mertens, Christopher J.; Walker, Steve A.; Blattnig, Steven R.; Zeitlin, Cary J.

    2006-01-01

The HZETRN code was used in recent trade studies for renewed lunar exploration and is currently used in engineering development of the next generation of space vehicles, habitats, and EVA equipment. A new version of the HZETRN code capable of simulating high charge and energy (HZE) ions, light ions, and neutrons with either laboratory or space boundary conditions with enhanced neutron and light-ion propagation is under development. Atomic and nuclear model requirements to support that development will be discussed. Such engineering design codes require establishing validation processes using laboratory ion beams and space flight measurements in realistic geometries. We discuss limitations of code validation due to the currently available data and recommend priorities for new data sets.

  11. Validation of Multitemperature Nozzle Flow Code

    NASA Technical Reports Server (NTRS)

Park, Chul; Lee, Seung-Ho

    1994-01-01

The computer code NOZNT (nozzle in n-temperatures), which calculates one-dimensional flows of partially dissociated and ionized air in an expanding nozzle, is tested against three existing sets of experimental data taken in arcjet wind tunnels. The code accounts for the differences among various temperatures, i.e., the translational-rotational temperature, the vibrational temperatures of individual molecular species, and the electron-electronic temperature, as well as the effects of impurities. The experimental data considered are (1) spectroscopic emission data; (2) electron beam data on vibrational temperature; and (3) mass-spectrometric species concentration data. It is shown that the impurities are inconsequential for the arcjet flows, and the NOZNT code is validated by numerically reproducing the experimental data.

  12. Chronic obstructive lung disease "expert system": validation of a predictive tool for assisting diagnosis.

    PubMed

    Braido, Fulvio; Santus, Pierachille; Corsico, Angelo Guido; Di Marco, Fabiano; Melioli, Giovanni; Scichilone, Nicola; Solidoro, Paolo

    2018-01-01

The purposes of this study were the development and validation of an expert system (ES) aimed at supporting the diagnosis of chronic obstructive lung disease (COLD). A questionnaire and a WebFlex code were developed and validated in silico. An expert panel pilot validation on 60 cases and a clinical validation on 241 cases were performed. The questionnaire and code developed and validated in silico resulted in a suitable tool to support medical diagnosis. The clinical validation of the ES was performed in an academic setting that included six different reference centers for respiratory diseases. The results of the ES, expressed as a score associated with the risk of suffering from COLD, were matched and compared with the final clinical diagnoses. A set of 60 patients was evaluated in a pilot expert panel validation with the aim of calculating the sample size for the clinical validation study. The concordance analysis between these preliminary ES scores and the diagnoses performed by the experts indicated that the accuracy was 94.7% when both the experts and the system confirmed the COLD diagnosis and 86.3% when COLD was excluded. Based on these results, the sample size of the validation set was established at 240 patients. The clinical validation, performed on 241 patients, resulted in an ES accuracy of 97.5%, with a confirmed COLD diagnosis in 53.6% of the cases and an excluded COLD diagnosis in 32% of the cases. In 11.2% of cases, a diagnosis of COLD was made by the experts although the imaging results showed a potential concomitant disorder. The ES presented here (COLD ES) is a safe and robust supporting tool for COLD diagnosis in primary care settings.

  13. The Validation of Macro and Micro Observations of Parent-Child Dynamics Using the Relationship Affect Coding System in Early Childhood.

    PubMed

    Dishion, Thomas J; Mun, Chung Jung; Tein, Jenn-Yun; Kim, Hanjoe; Shaw, Daniel S; Gardner, Frances; Wilson, Melvin N; Peterson, Jenene

    2017-04-01

This study examined the validity of micro social observations and macro ratings of parent-child interaction in early to middle childhood. Seven hundred and thirty-one families representing multiple ethnic groups were recruited and screened as at risk in the context of Women, Infants, and Children (WIC) Nutritional Supplement service settings. Families were randomly assigned to the Family Checkup (FCU) intervention or the control condition at age 2 and videotaped in structured interactions in the home at ages 2, 3, 4, and 5. Parent-child interaction videotapes were micro-coded using the Relationship Affect Coding System (RACS), which captures the duration of two mutual dyadic states: positive engagement and coercion. Macro ratings of parenting skills were collected after coding the videotapes to assess parent use of positive behavior support and limit setting skills (or lack thereof). Confirmatory factor analyses revealed that the measurement model of macro ratings of limit setting and positive behavior support was not supported by the data, and these ratings were thus excluded from further analyses. However, there was moderate stability in the families' micro social dynamics across early childhood, and these dynamics showed significant improvement as a function of random assignment to the FCU. Moreover, parent-child dynamics were predictive of chronic behavior problems as rated by parents in middle childhood, but not emotional problems. We conclude with a discussion of the validity of the RACS and of the methodological advantages of micro social coding over the statistical limitations of macro rating observations. Future directions are discussed for observation research in prevention science.

  14. Validity of the International Classification of Diseases, Tenth Revision code for acute kidney injury in elderly patients at presentation to the emergency department and at hospital admission

    PubMed Central

    Hwang, Y Joseph; Shariff, Salimah Z; Gandhi, Sonja; Wald, Ron; Clark, Edward; Fleet, Jamie L; Garg, Amit X

    2012-01-01

Objective To evaluate the validity of the International Classification of Diseases, Tenth Revision (ICD-10) code N17x for acute kidney injury (AKI) in elderly patients in two settings: at presentation to the emergency department and at hospital admission. Design A population-based retrospective validation study. Setting Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum creatinine measurements at presentation to the emergency department (n=36 049) or hospital admission (n=38 566). The baseline serum creatinine measurement was a median of 102 and 39 days prior to presentation to the emergency department and hospital admission, respectively. Main outcome measures Sensitivity, specificity and positive and negative predictive values of ICD-10 diagnostic coding algorithms for AKI using a reference standard based on changes in serum creatinine from the baseline value. Median changes in serum creatinine of patients who were code positive and code negative for AKI. Results The sensitivity of the best-performing coding algorithm for AKI (defined as a ≥2-fold increase in serum creatinine concentration) was 37.4% (95% CI 32.1% to 43.1%) at presentation to the emergency department and 61.6% (95% CI 57.5% to 65.5%) at hospital admission. The specificity was greater than 95% in both settings. In patients who were code positive for AKI, the median (IQR) increase in serum creatinine from the baseline was 133 (62 to 288) µmol/l at presentation to the emergency department and 98 (43 to 200) µmol/l at hospital admission. In those who were code negative, the increase in serum creatinine was 2 (−8 to 14) and 6 (−4 to 20) µmol/l, respectively. Conclusions The presence or absence of ICD-10 code N17x differentiates two groups of patients with distinct changes in serum creatinine at the time of a hospital encounter. However, the code underestimates the true incidence of AKI due to limited sensitivity. PMID:23204077
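
    The reference standard above is a change-from-baseline rule; a minimal, illustrative sketch:

      # AKI reference standard per the abstract: serum creatinine at the
      # encounter at least double the patient's baseline value.
      def aki_by_creatinine(creatinine_umol_l, baseline_umol_l):
          return creatinine_umol_l >= 2 * baseline_umol_l

      # Sensitivity of the N17x code is then the share of reference-standard
      # AKI encounters that also carry the code.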

  15. Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    2016-11-02

The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs are investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  16. Convergent Validity of O*NET Holland Code Classifications

    ERIC Educational Resources Information Center

    Eggerth, Donald E.; Bowles, Shannon M.; Tunick, Roy H.; Andrew, Michael E.

    2005-01-01

    The interpretive ease and intuitive appeal of the Holland RIASEC typology have made it nearly ubiquitous in vocational guidance settings. Its incorporation into the Occupational Information Network (O*NET) has moved it another step closer to reification. This research investigated the rates of agreement between Holland code classifications from…

  17. ICF-CY code set for infants with early delay and disabilities (EDD Code Set) for interdisciplinary assessment: a global experts survey.

    PubMed

    Pan, Yi-Ling; Hwang, Ai-Wen; Simeonsson, Rune J; Lu, Lu; Liao, Hua-Fang

    2015-01-01

Comprehensive description of functioning is important in providing early intervention services for infants with developmental delay/disabilities (DD). A code set of the International Classification of Functioning, Disability and Health: Children and Youth Version (ICF-CY) could facilitate the practical use of the ICF-CY in team evaluation. The purpose of this study was to derive an ICF-CY code set for infants under three years of age with early delay and disabilities (EDD Code Set) for initial team evaluation. The EDD Code Set based on the ICF-CY was developed on the basis of a Delphi survey of international professionals experienced in implementing the ICF-CY and professionals in the early intervention service system in Taiwan. Twenty-five professionals completed the Delphi survey. A total of 82 ICF-CY second-level categories were identified for the EDD Code Set, including 28 categories from the domain Activities and Participation, 29 from body functions, 10 from body structures and 15 from environmental factors. The EDD Code Set of 82 ICF-CY categories could be useful in multidisciplinary team evaluations to describe functioning of infants younger than three years of age with DD, in a holistic manner. Future validation of the EDD Code Set and examination of its clinical utility are needed. The EDD Code Set with 82 essential ICF-CY categories could be useful in the initial team evaluation as a common language to describe functioning of infants less than three years of age with developmental delay/disabilities, with a more holistic view. The EDD Code Set including essential categories in activities and participation, body functions, body structures and environmental factors could be used to create a functional profile for each infant with special needs and to clarify the interaction of child and environment accounting for the child's functioning.

  18. The Development of Accepted Performance Items to Demonstrate Braille Competence in the Nemeth Code for Mathematics and Science Notation

    ERIC Educational Resources Information Center

    Smith, Derrick; Rosenblum, L. Penny

    2013-01-01

    Introduction: The purpose of the study presented here was the initial validation of a comprehensive set of competencies focused solely on the Nemeth code. Methods: Using the Delphi method, 20 expert panelists were recruited to participate in the study on the basis of their past experience in teaching a university-level course in the Nemeth code.…

  19. Automatic Rock Detection and Mapping from HiRISE Imagery

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Adams, Douglas S.; Cheng, Yang

    2008-01-01

    This system includes a C-code software program and a set of MATLAB software tools for statistical analysis and rock distribution mapping. The major functions include rock detection and rock detection validation. The rock detection code has been evolved into a production tool that can be used by engineers and geologists with minor training.

  20. Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors

    DOE PAGES

    Epiney, A.; Canepa, S.; Zerkak, O.; ...

    2016-11-02

The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs are investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  1. The Validation of Macro and Micro Observations of Parent–Child Dynamics Using the Relationship Affect Coding System in Early Childhood

    PubMed Central

    Mun, Chung Jung; Tein, Jenn-Yun; Kim, Hanjoe; Shaw, Daniel S.; Gardner, Frances; Wilson, Melvin N.; Peterson, Jenene

    2018-01-01

    This study examined the validity of micro social observations and macro ratings of parent–child interaction in early to middle childhood. Seven hundred and thirty-one families representing multiple ethnic groups were recruited and screened as at risk in the context of Women, Infants, and Children (WIC) Nutritional Supplement service settings. Families were randomly assigned to the Family Check-Up (FCU) intervention or the control condition at age 2 and videotaped in structured interactions in the home at ages 2, 3, 4, and 5. Parent–child interaction videotapes were microcoded using the Relationship Affect Coding System (RACS), which captures the duration of two mutual dyadic states: positive engagement and coercion. Macro ratings of parenting skills were collected after coding the videotapes to assess parents' use of positive behavior support and limit-setting skills (or lack thereof). Confirmatory factor analyses revealed that the measurement model of macro ratings of limit setting and positive behavior support was not supported by the data, and these ratings were therefore excluded from further analyses. However, there was moderate stability in the families' micro social dynamics across early childhood, and those dynamics showed significant improvements as a function of random assignment to the FCU. Moreover, parent–child dynamics were predictive of chronic behavior problems, as rated by parents in middle childhood, but not of emotional problems. We conclude with a discussion of the validity of the RACS and the methodological advantages of micro social coding over the statistical limitations of macro rating observations. Future directions are discussed for observational research in prevention science. PMID:27620623

  2. A Taxonomy for Mannerisms of Blind Children.

    ERIC Educational Resources Information Center

    Eichel, Valerie J.

    1979-01-01

    The investigation involving 24 blind children (2-11 years old) set out to develop and validate a coding procedure which employed a set of 34 descriptors with their corresponding definitions. The use of the taxonomy enabled a detailed, systematic study of manneristic behavior in blind children. (Author/SBH)

  3. Development and validation of an Argentine set of facial expressions of emotion.

    PubMed

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  4. Development of a novel coding scheme (SABICS) to record nurse-child interactive behaviours in a community dental preventive intervention.

    PubMed

    Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry

    2012-08-01

    To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed by implementing the scheme in specialised behavioural coding software. Reliability was calculated using Cohen's kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme in The Observer XT 8.0 system. Two visualisations of interaction patterns demonstrated the scheme's capability to capture complex interaction processes. Cohen's kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027) predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3- to 5-year-old children. It records and displays complex nurse-child interactive behaviours, is easily administered, and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings, and its development procedure may inform the development of similar coding schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
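
    For reference, the reliability statistic used above, Cohen's kappa, can be computed from two coders' label sequences as in this short Python sketch; the behaviour codes in the example are hypothetical.

      from collections import Counter

      def cohens_kappa(coder_a, coder_b):
          """Cohen's kappa for two equal-length sequences of categorical codes."""
          assert len(coder_a) == len(coder_b)
          n = len(coder_a)
          observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
          freq_a, freq_b = Counter(coder_a), Counter(coder_b)
          labels = set(coder_a) | set(coder_b)
          # Chance agreement from the coders' marginal label frequencies.
          expected = sum(freq_a[l] * freq_b[l] for l in labels) / n**2
          return (observed - expected) / (1 - expected)

      # Hypothetical behaviour codes from two coders for five events:
      a = ["instruction", "praise", "praise", "other", "instruction"]
      b = ["instruction", "praise", "other", "other", "instruction"]
      print(round(cohens_kappa(a, b), 2))  # -> 0.71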

  5. Summary of papers on current and anticipated uses of thermal-hydraulic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, R.

    1997-07-01

    The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; and build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).

  6. Validation of the new diagnosis grouping system for pediatric emergency department visits using the International Classification of Diseases, 10th Revision.

    PubMed

    Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun

    2013-12-01

    A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and national policy-making for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) successfully developed such a DGS. We developed a modified PECARN DGS based on the different pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on the pediatric emergent conditions of the selected population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which is coded with the International Classification of Diseases, 10th Revision (ICD-10) code system. To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes with those of the PECARN DGS using the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes via GEMs, we matched the ICD-9 codes to PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in developing the modified DGS. After 1078 (57.4%) of the 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups through 2 rounds of electronic Delphi surveys, and the remaining 29 codes (4%) were assigned to the modified DGS at the second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of the diagnoses in the 2008 and 2009 National Emergency Department Information System data sets, respectively. The modified DGS also exhibited strong construct validity against the concepts of age, sex, site of care, and season, and it reflected the 2009 outbreak of H1N1 influenza in Korea. We developed and validated a clinically feasible and sensible DGS for describing pediatric emergent conditions in Korea. The modified PECARN DGS showed good comprehensiveness and demonstrated reliable construct validity. This DGS, based on the PECARN DGS framework, may be effectively implemented for research, reporting, and resource planning in the pediatric emergency system of South Korea.
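
    The chained mapping the authors describe (ICD-10 to ICD-9 via the GEMs, then ICD-9 to a DGS category, with unmatched codes deferred to a Delphi panel) can be sketched as two dictionary lookups. The codes and category names below are illustrative placeholders, not the study's actual tables.

      # Hypothetical fragments of the two mapping tables; the real GEMs and
      # the PECARN matrix are far larger and include one-to-many matches.
      gem_icd10_to_icd9 = {
          "J45.901": ["493.92"],     # asthma with exacerbation (illustrative)
          "S52.501A": ["813.41"],    # distal radius fracture (illustrative)
          "R50.9": ["780.60"],       # fever, unspecified (illustrative)
      }
      icd9_to_dgs = {
          "493.92": "respiratory: asthma",
          "813.41": "trauma: fracture",
          # 780.60 intentionally unmapped -> falls through to Delphi review
      }

      def assign_dgs(icd10_code):
          """Return DGS categories reachable via GEM, or None if unmatched."""
          cats = {icd9_to_dgs[c]
                  for c in gem_icd10_to_icd9.get(icd10_code, [])
                  if c in icd9_to_dgs}
          return sorted(cats) or None  # None -> expert panel (Delphi) step

      for code in ["J45.901", "S52.501A", "R50.9"]:
          print(code, "->", assign_dgs(code))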

  7. Benchmark radar targets for the validation of computational electromagnetics programs

    NASA Technical Reports Server (NTRS)

    Woo, Alex C.; Wang, Helen T. G.; Schuh, Michael J.; Sanders, Michael L.

    1993-01-01

    Results are presented of a set of computational electromagnetics validation measurements referring to three-dimensional perfectly conducting smooth targets, performed for the Electromagnetic Code Consortium. Plots are presented for both the low- and high-frequency measurements of the NASA almond, an ogive, a double ogive, a cone-sphere, and a cone-sphere with a gap.

  8. Validation of a multi-layer Green's function code for ion beam transport

    NASA Astrophysics Data System (ADS)

    Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence

    To meet the challenge of future deep space programs, an accurate and efficient engineering code is needed for analyzing shielding requirements against high-energy galactic heavy-ion radiation. In consequence, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and, like its predecessor, is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and downshift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. In order to validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings, and also provides verification of the HZETRN propagator.

  9. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  10. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 deg flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 deg and aft-cone angle of 55 deg. Both sets of experiments involve 30 deg compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  11. Validation of multi-temperature nozzle flow code NOZNT

    NASA Technical Reports Server (NTRS)

    Park, Chul; Lee, Seung-Ho

    1993-01-01

    A computer code NOZNT (Nozzle in n-Temperatures), which calculates one-dimensional flows of partially dissociated and ionized air in an expanding nozzle, is tested against five existing sets of experimental data. The code accounts for: a) the differences among various temperatures, i.e., translational-rotational temperature, vibrational temperatures of individual molecular species, and electron-electronic temperature, b) radiative cooling, and c) the effects of impurities. The experimental data considered are: 1) the sodium line reversal and 2) the electron temperature and density data, both obtained in a shock tunnel, and 3) the spectroscopic emission data, 4) electron beam data on vibrational temperature, and 5) mass-spectrometric species concentration data, all obtained in arc-jet wind tunnels. It is shown that the impurities are most likely responsible for the observed phenomena in shock tunnels. For the arc-jet flows, impurities are inconsequential and the NOZNT code is validated by numerically reproducing the experimental data.

  12. Validation of Living Donor Nephrectomy Codes

    PubMed Central

    Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.

    2018-01-01

    Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
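
    The operating characteristics reported above reduce to two ratios over set intersections. A minimal Python sketch, with made-up record counts rather than the study's data:

      def operating_characteristics(flagged, reference):
          """Sensitivity and PPV of an administrative-code algorithm.
          `flagged`  : set of record IDs the code algorithm identifies
          `reference`: set of record IDs confirmed by manual chart review
          """
          tp = len(flagged & reference)
          sensitivity = tp / len(reference)   # TP / (TP + FN)
          ppv = tp / len(flagged)             # TP / (TP + FP)
          return sensitivity, ppv

      # Hypothetical toy example (not the study's data):
      reference = set(range(100))                      # 100 true donor nephrectomies
      flagged = set(range(3, 100)) | {200, 201, 202}   # misses 3, adds 3 false hits
      sens, ppv = operating_characteristics(flagged, reference)
      print(f"sensitivity={sens:.2f} PPV={ppv:.2f}")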

  13. From model conception to verification and validation, a global approach to multiphase Navier-Stokes models with an emphasis on volcanic explosive phenomenology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dartevelle, Sebastian

    2007-10-01

    Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurement: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors upon many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control volcanic clouds: namely, the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, each of which uniquely and unambiguously represents one of the key phenomenologies.

  14. Developing a Data Set and Processing Methodology for Fluid/Structure Interaction Code Validation

    DTIC Science & Technology

    2007-06-01

    Report excerpt (figure-index and text fragments only): 9-probe wake survey rake configurations; structural stability and fatigue in test article components and, in general, in facility support structures and rotating machinery blading; validation of the U of CO blade analysis and simulation technology using flight-test data and test data from a wind tunnel.

  15. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C., E-mail: hansec@uw.edu; Victor, B.

    We present the application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
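
    BD of probe-array data is, in essence, a singular value decomposition of the space-time data matrix, and one plausible agreement metric is the correlation between paired leading spatial modes. The sketch below illustrates this idea on synthetic data; it is not the authors' exact metric definitions.

      import numpy as np

      def leading_modes(data, n_modes=2):
          """Biorthogonal decomposition via SVD.
          `data` has shape (n_probes, n_times); columns are snapshots."""
          u, s, vt = np.linalg.svd(data, full_matrices=False)
          return u[:, :n_modes], s[:n_modes]

      def mode_correlation(data_exp, data_sim, n_modes=2):
          """Absolute correlation between paired spatial modes (sign-agnostic);
          mode vectors are unit length, so the dot product is a cosine."""
          u_e, _ = leading_modes(data_exp, n_modes)
          u_s, _ = leading_modes(data_sim, n_modes)
          return [abs(float(u_e[:, k] @ u_s[:, k])) for k in range(n_modes)]

      # Toy data: 192 probes, 500 times, two coherent modes plus noise;
      # the "simulation" is a noisy copy of the "experiment".
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 500)
      x = np.linspace(0.0, 2 * np.pi, 192)[:, None]
      exp = np.sin(x) * np.sin(40 * t) + 0.5 * np.cos(2 * x) * np.cos(25 * t)
      exp += 0.05 * rng.standard_normal((192, 500))
      sim = exp + 0.05 * rng.standard_normal((192, 500))
      print(mode_correlation(exp, sim))  # values near 1 indicate agreement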

  16. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons.

    PubMed

    Mertz, Marcel; Sofaer, Neema; Strech, Daniel

    2014-09-27

    The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?

  17. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
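
    A heavily simplified sketch of such a sequential design loop follows: candidate conditions are scored by the spread of low-fidelity predictions across a parameter ensemble (a crude variance proxy for information gain, not the paper's information-theoretic criterion), and the high-fidelity code is evaluated where that spread is largest. All models, names, and numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)

      def low_fidelity(theta, x):
          # Hypothetical low-fidelity model: exponential friction correlation.
          return theta[0] * np.exp(-theta[1] * x)

      def high_fidelity(x):
          # Stand-in for the expensive code ("truth" plus model discrepancy).
          return 1.3 * np.exp(-0.9 * x) + 0.02 * np.sin(5 * x)

      # Prior ensemble over the low-fidelity parameters theta = (a, b).
      ens = np.column_stack([rng.normal(1.0, 0.3, 2000),
                             rng.normal(1.0, 0.3, 2000)])
      candidates = np.linspace(0.0, 3.0, 31)
      chosen = []

      for _ in range(4):  # budget: 4 high-fidelity evaluations
          # Ensemble variance of predictions = uncertainty proxy per condition.
          var = [np.var([low_fidelity(th, x) for th in ens]) for x in candidates]
          x_star = candidates[int(np.argmax(var))]
          chosen.append(float(x_star))
          y = high_fidelity(x_star)
          # Crude update: keep ensemble members roughly consistent with y.
          keep = np.abs(low_fidelity(ens.T, x_star) - y) < 0.1
          ens = ens[keep] if keep.sum() > 50 else ens
          candidates = candidates[candidates != x_star]

      print("high-fidelity evaluations at x =", chosen)
      print("posterior mean of (a, b):", ens.mean(axis=0).round(3))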

  18. Comparison Between Predicted and Experimentally Measured Flow Fields at the Exit of the SSME HPFTP Impeller

    NASA Technical Reports Server (NTRS)

    Bache, George

    1993-01-01

    Validation of CFD codes is a critical first step in the process of developing a CFD design capability. The MSFC Pump Technology Team has recognized the importance of validation and has funded several experimental programs designed to obtain CFD-quality validation data. The first data set to become available is for the SSME High Pressure Fuel Turbopump impeller. LDV data were taken at the impeller inlet (to obtain a reliable inlet boundary condition) and at three radial positions at the impeller discharge. Our CFD code, TASCflow, is used within the propulsion and commercial pump industries as a tool for pump design. The objective of this work, therefore, is to further validate TASCflow for application in pump design. TASCflow was used to predict the flow at the impeller discharge for flow rates of 80, 100, and 115 percent of design flow. Comparison to data has been made with encouraging results.

  19. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, used a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind Tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. The close comparisons validate US3D for highly separated flows similar to those examined here.

  20. Computation of Thermally Perfect Compressible Flow Properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake

    1996-01-01

    A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c_p (specific heat at constant pressure) expressed as a polynomial function of temperature, and developed into a computer program referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. Accuracy of the TPG code in both the calorically perfect and the thermally perfect, calorically imperfect temperature regimes is verified by comparisons with the methods of NACA Report 1135. The advantages of the TPG code compared to the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, its ease of use, and its tabulated results.
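
    The core idea, c_p as a polynomial in temperature with the other gas properties derived from it, fits in a few lines. In the Python sketch below, the polynomial coefficients are illustrative placeholders, not the TPG code's curve fits.

      import math

      R = 287.05  # J/(kg*K), specific gas constant for air

      # Hypothetical fit cp(T) = c0 + c1*T + c2*T^2  [J/(kg*K)];
      # real curve fits use more terms and multiple temperature ranges.
      cp_coeffs = (965.0, 0.21, -2.0e-5)

      def cp(T):
          return sum(c * T**i for i, c in enumerate(cp_coeffs))

      def gamma(T):
          """Ratio of specific heats for a thermally perfect gas: cp/(cp - R)."""
          return cp(T) / (cp(T) - R)

      def speed_of_sound(T):
          return math.sqrt(gamma(T) * R * T)

      # gamma falls with temperature: the calorically imperfect behavior
      # that constant-gamma (NACA 1135) tables cannot capture.
      for T in (300.0, 1000.0, 2000.0):
          print(f"T={T:6.0f} K  cp={cp(T):7.1f}  gamma={gamma(T):.4f}  "
                f"a={speed_of_sound(T):6.1f} m/s")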

  1. Validation of an International Classification of Diseases, Ninth Revision Code Algorithm for Identifying Chiari Malformation Type 1 Surgery in Adults.

    PubMed

    Greenberg, Jacob K; Ladner, Travis R; Olsen, Margaret A; Shannon, Chevis N; Liu, Jingxia; Yarbrough, Chester K; Piccirillo, Jay F; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2015-08-01

    The use of administrative billing data may enable large-scale assessments of treatment outcomes for Chiari Malformation type I (CM-1). However, to utilize such data sets, validated International Classification of Diseases, Ninth Revision (ICD-9-CM) code algorithms for identifying CM-1 surgery are needed. Our objective was to validate 2 ICD-9-CM code algorithms identifying patients undergoing CM-1 decompression surgery. We retrospectively analyzed the validity of 2 ICD-9-CM code algorithms for identifying adult CM-1 decompression surgery performed at 2 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-1), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to patients with a primary diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. Among 340 first-time admissions identified by Algorithm 1, the overall PPV for CM-1 decompression was 65%. Among the 214 admissions identified by Algorithm 2, the overall PPV was 99.5%. The PPV for Algorithm 1 was lower in the Vanderbilt cohort (59%), in males (40%), and in patients treated between 2009 and 2013 (57%), whereas the PPV of Algorithm 2 remained high (≥99%) across subgroups. The sensitivities of Algorithm 1 (86%) and Algorithm 2 (83%) were above 75% in all subgroups. ICD-9-CM code Algorithm 2 has excellent PPV and good sensitivity for identifying adult CM-1 decompression surgery. These results lay the foundation for studying CM-1 treatment outcomes by using large administrative databases.
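
    The two algorithms are simple predicates over an admission's code lists, as the following Python sketch illustrates; the admission records are hypothetical.

      CM1 = "348.4"
      DECOMPRESSION = {"01.24", "03.09"}

      def algorithm_1(admission):
          """Any discharge diagnosis of CM-1 plus a decompression procedure."""
          return (CM1 in admission["diagnoses"]
                  and bool(DECOMPRESSION & set(admission["procedures"])))

      def algorithm_2(admission):
          """Restricts Algorithm 1 to a *primary* diagnosis of CM-1."""
          return (admission["diagnoses"][:1] == [CM1]
                  and bool(DECOMPRESSION & set(admission["procedures"])))

      # Hypothetical admissions; diagnoses are listed primary-first.
      admissions = [
          {"id": 1, "diagnoses": ["348.4", "331.3"], "procedures": ["01.24"]},
          {"id": 2, "diagnoses": ["331.3", "348.4"], "procedures": ["03.09"]},
          {"id": 3, "diagnoses": ["348.4"], "procedures": ["88.91"]},
      ]
      for a in admissions:
          print(a["id"], algorithm_1(a), algorithm_2(a))
      # -> 1 True True / 2 True False / 3 False False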

  2. A Comprehensive Observational Coding Scheme for Analyzing Instrumental, Affective, and Relational Communication in Health Care Contexts

    PubMed Central

    SIMINOFF, LAURA A.; STEP, MARY M.

    2011-01-01

    Many observational coding schemes have been offered to measure communication in health care settings. These schemes fall short of capturing multiple functions of communication among providers, patients, and other participants. After a brief review of observational communication coding, the authors present a comprehensive scheme for coding communication that is (a) grounded in communication theory, (b) accounts for instrumental and relational communication, and (c) captures important contextual features with tailored coding templates: the Siminoff Communication Content & Affect Program (SCCAP). To test SCCAP reliability and validity, the authors coded data from two communication studies. The SCCAP provided reliable measurement of communication variables including tailored content areas and observer ratings of speaker immediacy, affiliation, confirmation, and disconfirmation behaviors. PMID:21213170

  3. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
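
    One standard way to assess the error behavior of an algorithm is to measure its observed order of accuracy from solutions on successively refined grids. A generic Python sketch; the error values are hypothetical, e.g. measured against a manufactured solution.

      import math

      def observed_order(errors, refinement_ratio=2.0):
          """Observed order of accuracy p from errors on successively
          refined grids, assuming e_k ~ C * h_k**p with h halving each level."""
          return [math.log(errors[k] / errors[k + 1]) / math.log(refinement_ratio)
                  for k in range(len(errors) - 1)]

      # Hypothetical discretization errors from a nominally 2nd-order scheme
      # on grids of spacing h, h/2, h/4, h/8.
      errors = [4.0e-2, 1.02e-2, 2.55e-3, 6.4e-4]
      print([round(p, 2) for p in observed_order(errors)])  # ~[1.97, 2.0, 1.99]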

  4. On the optimality of a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner H.

    1993-01-01

    Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, the other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques are performed on several widely used images, and the results further validate the coder's optimality.
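
    The adaptive mechanism, choosing whichever variable-length option encodes the current block most compactly, can be sketched for the Golomb-Rice options as follows. This is a simplified illustration (sign-interleaving and the default option are omitted), not the flight coder itself.

      def rice_encode(values, k):
          """Golomb-Rice code: unary quotient + k-bit remainder per sample.
          `values` are non-negative integer residuals."""
          out = []
          for v in values:
              q, r = v >> k, v & ((1 << k) - 1)
              out.append("1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0")
          return "".join(out)

      def best_option(values, k_options=range(0, 8)):
          """Adaptively select the k that minimizes the encoded block length."""
          coded = {k: rice_encode(values, k) for k in k_options}
          k_best = min(coded, key=lambda k: len(coded[k]))
          return k_best, coded[k_best]

      # Non-negative prediction residuals with a Laplacian-like distribution
      # (the usual sign-interleaving step is assumed to have been applied).
      block = [0, 3, 1, 7, 2, 0, 5, 1, 12, 4, 2, 6, 0, 1, 3, 8]
      k, bits = best_option(block)
      print(f"k={k}, {len(bits)} bits vs {len(block) * 8} raw bits")  # k=1, 56 vs 128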

  5. Concurrent validation of an inertial measurement system to quantify kicking biomechanics in four football codes.

    PubMed

    Blair, Stephanie; Duthie, Grant; Robertson, Sam; Hopkins, William; Ball, Kevin

    2018-05-17

    Wearable inertial measurement systems (IMS) allow for three-dimensional analysis of human movements in a sport-specific setting. This study examined the concurrent validity of an IMS (the Xsens MVN system) for measuring lower extremity and pelvis kinematics, in comparison to a Vicon motion analysis system (MAS), during kicking. Thirty footballers from Australian football (n = 10), soccer (n = 10), and rugby league and rugby union (n = 10) clubs completed 20 kicks across four conditions. Concurrent validity was assessed using a linear mixed-modelling approach, which allowed the partition of between- and within-subject variance from the device measurement error. Results were expressed in raw and standardised units for assessments of differences in means and measurement error, and interpreted via non-clinical magnitude-based inferences. Trivial to small differences were found in linear velocity (foot and pelvis), angular velocity (knee, shank and thigh), sagittal joint (knee and hip) and segment angle (shank and pelvis) means (mean difference: 0.2-5.8%) between the IMS and MAS in Australian football, soccer and the rugby codes. Trivial to small measurement errors (from 0.1 to 5.8%) were found between the IMS and MAS in all kinematic parameters. The IMS demonstrated acceptable levels of concurrent validity compared to a MAS when measuring kicking biomechanics across the four football codes. Wearable IMS offer various benefits over MAS, such as out-of-laboratory testing, a larger measurement range and quick data output, which help improve the ecological validity of biomechanical testing and the timeliness of feedback. The results advocate the use of IMS to quantify the biomechanics of high-velocity movements in sport-specific settings. Copyright © 2018 Elsevier Ltd. All rights reserved.
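
    The paper's mixed-model partition is not reproduced here, but the basic quantities, the mean difference and a standardized typical error between paired device measurements, can be sketched simply; the paired foot-velocity values are hypothetical.

      import statistics as st

      def concurrent_validity(ims, mas):
          """Mean difference (%) and standardized typical error between
          paired measurements from two systems (one common definition:
          typical error = SD of the pairwise differences / sqrt(2))."""
          diffs = [a - b for a, b in zip(ims, mas)]
          bias = st.mean(diffs)
          typical_error = st.stdev(diffs) / 2 ** 0.5
          criterion_sd = st.stdev(mas)
          return {
              "mean_diff_pct": 100 * bias / st.mean(mas),
              "standardized_error": typical_error / criterion_sd,
          }

      # Hypothetical paired foot-velocity measurements (m/s).
      mas = [18.2, 19.1, 17.8, 20.3, 18.9, 19.6, 18.4, 20.0]
      ims = [18.4, 19.0, 18.1, 20.6, 18.8, 19.9, 18.6, 20.1]
      print(concurrent_validity(ims, mas))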

  6. A Validation and Code-to-Code Verification of FAST for a Megawatt-Scale Wind Turbine with Aeroelastically Tailored Blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan

    This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine had been made available in accordance with the International Electrotechnical Commission 61400-13 guidelines, were identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. This paper presents a detailed analysis comparing results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and the standard deviations along with the power spectral densities of select turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and experimental measurements. These agreements are discussed in detail, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.

  7. A Validation and Code-to-Code Verification of FAST for a Megawatt-Scale Wind Turbine with Aeroelastically Tailored Blades

    DOE PAGES

    Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan; ...

    2017-08-29

    This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine had been made available in accordance with the International Electrotechnical Commission 61400-13 guidelines, were identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. This paper presents a detailed analysis comparing results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and the standard deviations along with the power spectral densities of select turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and experimental measurements. These agreements are discussed in detail, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.

  8. Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuscamman, Stephanie; Glenn, Lewis; Schebler, Gregory

    2011-09-12

    This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high explosive experiments (1997).

  9. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.

  10. Verification and Validation (V&V) Methodologies for Multiphase Turbulent and Explosive Flows. V&V Case Studies of Computer Simulations from Los Alamos National Laboratory GMFIX codes

    NASA Astrophysics Data System (ADS)

    Dartevelle, S.

    2006-12-01

    Large-scale volcanic eruptions are inherently hazardous events and hence cannot be described by detailed and accurate in situ measurements; as a result, volcanic explosive phenomenology is inadequately constrained in terms of initial and inflow conditions. Consequently, little to no real-time data exist to Verify and Validate computer codes developed to model these geophysical events as a whole. However, code Verification and Validation remains a necessary step, particularly as volcanologists increasingly use numerical data for mitigation of volcanic hazards. The Verification and Validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, Verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, Validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The Verification step is rather simple to achieve formally, while, in the context of 'real world' explosive volcanism, the Validation step is all but impossible. Hence, instead of validating computer codes against the whole large-scale, unconstrained volcanic phenomenology, we suggest focusing on the key physics that control volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomenologies separately. Here we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes recently redeveloped to meet the strict Quality Assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept of Energy. GMFIX solves Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle: the higher K, the further downstream the Mach disk, and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX reproduces all the structures found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which makes this code ideal for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing velocity profiles and turbulence quantities against well-constrained analog experiments. The velocity profiles agree with the analog ones, as do the profiles of turbulence production. Overall, the Verification and Validation experiments, although inherently challenging, suggest that GMFIX captures the most essential dynamical properties of multiphase and supersonic flows and jets.
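
    For the single-phase limit, a widely cited empirical correlation (Crist et al., 1966) places the first Mach disk at x/D ≈ 0.67 √(p0/p∞). The Python sketch below encodes that correlation plus a purely illustrative particle-loading factor that mimics the qualitative upstream shift with Phi described above; the factor is an assumption for illustration, not a result from this work.

      import math

      def mach_disk_location(pressure_ratio, particle_mass_fraction=0.0):
          """First Mach disk distance in nozzle diameters.
          Single-phase part: x/D = 0.67 * sqrt(K)  (Crist et al., 1966).
          The (1 - 0.5*Phi) factor is a hypothetical stand-in for the
          upstream shift with particle loading; it is illustrative only."""
          k, phi = pressure_ratio, particle_mass_fraction
          return 0.67 * math.sqrt(k) * (1.0 - 0.5 * phi)

      for k in (5, 20, 100):
          for phi in (0.0, 0.3):
              print(f"K={k:4d} Phi={phi:.1f}  x/D={mach_disk_location(k, phi):.2f}")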

  11. The moving mesh code SHADOWFAX

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  12. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process, a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.

  13. Identifying influenza-like illness presentation from unstructured general practice clinical narrative using a text classifier rule-based expert system versus a clinical expert.

    PubMed

    MacRae, Jayden; Love, Tom; Baker, Michael G; Dowell, Anthony; Carnachan, Matthew; Stubbe, Maria; McBain, Lynn

    2015-10-06

    We designed and validated a rule-based expert system to identify influenza-like illness (ILI) from routinely recorded general practice clinical narrative, to aid a larger retrospective research study into the impact of the 2009 influenza pandemic in New Zealand. Rules were assessed using pattern matching heuristics on routine clinical narrative. The system was trained using data from 623 clinical encounters and validated against a clinical expert as a gold standard on a mutually exclusive set of 901 records. We calculated a 98.2% specificity and 90.2% sensitivity across an ILI incidence of 12.4% measured against the clinical expert classification. Peak problem-list identification of ILI by clinical coding in any month was 9.2% of all detected ILI presentations. Our system addressed an unusual problem domain for clinical narrative classification: notational, unstructured, clinician-entered information in a community care setting. It performed well compared with other approaches and domains. It has potential applications in real-time surveillance of disease and in assisted problem-list coding for clinicians. Our system identified ILI presentation with sufficient accuracy for use at a population level in the wider research study. The peak coding rate of 9.2% illustrated the need for automated coding of unstructured narrative in our study.
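
    A toy version of such a rule-based classifier, with hypothetical patterns and negation rules rather than the system's trained rule set, together with the computation of its sensitivity and specificity against expert labels:

      import re

      # Hypothetical pattern-matching rules; the real system's rules were
      # developed and tuned on the 623 training encounters.
      ILI_PATTERNS = [r"\bflu\b", r"influenza", r"fever .{0,30}cough", r"myalgia"]
      NEGATIONS = [r"no (sign|evidence) of (the )?flu", r"flu (shot|vaccine|vacc)"]

      def classify(narrative):
          text = narrative.lower()
          if any(re.search(p, text) for p in NEGATIONS):
              return False
          return any(re.search(p, text) for p in ILI_PATTERNS)

      def evaluate(records):
          """records: list of (narrative, expert_label) pairs."""
          tp = sum(classify(t) and y for t, y in records)
          tn = sum(not classify(t) and not y for t, y in records)
          fp = sum(classify(t) and not y for t, y in records)
          fn = sum(not classify(t) and y for t, y in records)
          return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

      records = [
          ("3 days fever and dry cough, myalgia", True),
          ("here for flu vaccine, well", False),
          ("sore ankle after football", False),
          ("influenza-like illness, off work", True),
      ]
      print(evaluate(records))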

  14. Validation of suicide and self-harm records in the Clinical Practice Research Datalink

    PubMed Central

    Thomas, Kyla H; Davies, Neil; Metcalfe, Chris; Windmeijer, Frank; Martin, Richard M; Gunnell, David

    2013-01-01

    Aims The UK Clinical Practice Research Datalink (CPRD) is increasingly being used to investigate suicide-related adverse drug reactions. No studies have comprehensively validated the recording of suicide and nonfatal self-harm in the CPRD. We validated general practitioners' recording of these outcomes using linked Office for National Statistics (ONS) mortality and Hospital Episode Statistics (HES) admission data. Methods We identified cases of suicide and self-harm recorded using appropriate Read codes in the CPRD between 1998 and 2010 in patients aged ≥15 years. Suicides were defined as patients with Read codes for suicide recorded within 95 days of their death. International Classification of Diseases codes were used to identify suicides/hospital admissions for self-harm in the linked ONS and HES data sets. We compared CPRD-derived cases/incidence of suicide and self-harm with those identified from linked ONS mortality and HES data, national suicide incidence rates and published self-harm incidence data. Results Only 26.1% (n = 590) of the ‘true’ (ONS-confirmed) suicides were identified using Read codes. Furthermore, only 55.5% of Read code-identified suicides were confirmed as suicide by the ONS data. Of the HES-identified cases of self-harm, 68.4% were identified in the CPRD using Read codes. The CPRD self-harm rates based on Read codes had similar age and sex distributions to rates observed in self-harm hospital registers, although rates were underestimated in all age groups. Conclusions The CPRD recording of suicide using Read codes is unreliable, with significant inaccuracy (over- and under-reporting). Future CPRD suicide studies should use linked ONS mortality data. The under-reporting of self-harm appears to be less marked. PMID:23216533
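
    The CPRD case definition described above (a suicide Read code recorded within 95 days of the death date) can be sketched as a date-window filter; the Read code stems and patient records below are hypothetical.

      from datetime import date

      SUICIDE_READ_CODES = {"TK...", "U2..."}   # hypothetical Read code stems

      def cprd_suicides(patients, window_days=95):
          """Flag patients with a suicide Read code recorded within the
          given window of the death date, per the case definition above."""
          flagged = set()
          for p in patients:
              if p["death_date"] is None:
                  continue
              for code, when in p["records"]:
                  if (code in SUICIDE_READ_CODES
                          and abs((p["death_date"] - when).days) <= window_days):
                      flagged.add(p["id"])
          return flagged

      # Hypothetical patients and a hypothetical ONS reference standard.
      patients = [
          {"id": 1, "death_date": date(2009, 6, 1),
           "records": [("TK...", date(2009, 6, 10))]},   # within window
          {"id": 2, "death_date": date(2009, 6, 1),
           "records": [("TK...", date(2008, 1, 5))]},    # outside window
          {"id": 3, "death_date": None,
           "records": [("U2...", date(2009, 2, 2))]},    # no linked death
      ]
      ons_confirmed = {1, 2}
      cprd = cprd_suicides(patients)
      print("CPRD-identified:", cprd,
            "sensitivity vs ONS:", len(cprd & ons_confirmed) / len(ons_confirmed))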

  15. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the expermentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of a particular physical phenomena that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  16. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons

    PubMed Central

    2014-01-01

    Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532

  17. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life.

    PubMed

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring, and to assess the validity and reliability of the data sets obtained. Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 on both occasions. Code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00, respectively; the corresponding mean outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 code scores (range: 25-125) after the repeat. Corresponding measures were -1.10 (range: -5.31 to 5.25) and -1.11 (range: -5.42 to 5.36), respectively. Based on the measures obtained on the 2 occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level, and there was continuity in covering the range across disabilities. First and foremost, the distribution of codes reflected a true continuum of disability, with codes for motor functions activated first, then codes for cognitive functions, and finally codes for more complex functions. Parents can assess their own children in a valid and reliable way, and if the WHO ICF-CY second-level code data set functions in a clinically sound way, it can be employed as a tool for identifying the severity of disabilities and for monitoring changes in those disabilities over time. The ICF-CY codes selected in this study might be one cornerstone in forming a national or even international generic set of ICF-CY codes for the benefit of children with disabilities, their parents, and caregivers, and for the whole community supporting children with disabilities on a daily and perpetual basis.
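
    For reference, the Cronbach α reliability reported above can be computed from a respondents-by-items score matrix as follows; the 0-4 qualifier scores in the example are hypothetical.

      def cronbach_alpha(scores):
          """Cronbach's alpha for a respondents x items score matrix."""
          n_items = len(scores[0])
          totals = [sum(row) for row in scores]

          def var(xs):
              # Sample variance (n - 1 denominator).
              m = sum(xs) / len(xs)
              return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

          item_vars = [var([row[j] for row in scores]) for j in range(n_items)]
          return n_items / (n_items - 1) * (1 - sum(item_vars) / var(totals))

      # Hypothetical 0-4 qualifier scores: 6 parents x 4 ICF-CY codes.
      scores = [
          [1, 2, 1, 2],
          [0, 1, 0, 1],
          [3, 3, 2, 3],
          [2, 2, 2, 3],
          [4, 3, 3, 4],
          [1, 1, 1, 2],
      ]
      print(round(cronbach_alpha(scores), 2))  # -> 0.97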

  18. Parents’ Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life

    PubMed Central

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    AIM To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring and to assess the validity and reliability of the data sets obtained. METHOD Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. RESULTS The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 and 0.00, respectively. Code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after the repeat. Corresponding measures were −1.10 (range: −5.31 to 5.25) and −1.11 (range: −5.42 to 5.36), respectively. Based on measures obtained on the 2 occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities. And, first and foremost, the distribution of codes reflected a true continuity in disability, with codes for motor functions activated first, then codes for cognitive functions, and, finally, codes for more complex functions. CONCLUSIONS Parents can assess their own children in a valid and reliable way, and if the WHO ICF-CY second-level code data set is functioning in a clinically sound way, it can be employed as a tool for identifying the severity of disabilities and for monitoring changes in those disabilities over time. The ICF-CY codes selected in this study might be one cornerstone in forming a national or even international generic set of ICF-CY codes for the benefit of children with disabilities, their parents, and caregivers and for the whole community supporting children with disabilities on a daily and perpetual basis. PMID:28680270

  19. Parametric Studies of the Ejector Process within a Turbine-Based Combined-Cycle Propulsion System

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Walker, James F.; Trefny, Charles J.

    1999-01-01

    Performance characteristics of the ejector process within a turbine-based combined-cycle (TBCC) propulsion system are investigated using the NPARC Navier-Stokes code. The TBCC concept integrates a turbine engine with a ramjet into a single propulsion system that may efficiently operate from takeoff to high Mach number cruise. At the operating point considered, corresponding to a flight Mach number of 2.0, an ejector serves to mix flow from the ramjet duct with flow from the turbine engine. The combined flow then passes through a diffuser where it is mixed with hydrogen fuel and burned. Three sets of fully turbulent Navier-Stokes calculations are compared with predictions from a cycle code developed specifically for the TBCC propulsion system. A baseline ejector system is investigated first. The Navier-Stokes calculations indicate that the flow leaving the ejector is not completely mixed, which may adversely affect the overall system performance. Two additional sets of calculations are presented; one set that investigated a longer ejector region (to enhance mixing) and a second set which also utilized the longer ejector but replaced the no-slip surfaces of the ejector with slip (inviscid) walls in order to resolve discrepancies with the cycle code. The three sets of Navier-Stokes calculations and the TBCC cycle code predictions are compared to determine the validity of each of the modeling approaches.

  20. EBT reactor systems analysis and cost code: description and users guide (Version 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, R.T.; Uckan, N.A.; Barnes, J.M.

    1984-06-01

    An ELMO Bumpy Torus (EBT) reactor systems analysis and cost code that incorporates the most recent advances in EBT physics has been written. The code determines a set of reactors that fall within an allowed operating window determined from the coupling of ring and core plasma properties and the self-consistent treatment of the coupled ring-core stability and power balance requirements. The essential elements of the systems analysis and cost code are described, along with the calculational sequences leading to the specification of the reactor options and their associated costs. The input parameters, the constraints imposed upon them, and the operating range over which the code provides valid results are discussed. A sample problem and the interpretation of the results are also presented.

  1. Practical color vision tests for air traffic control applicants: en route center and terminal facilities.

    PubMed

    Mertens, H W; Milburn, N J; Collins, W E

    2000-12-01

    Two practical color vision tests were developed and validated for use in screening Air Traffic Control Specialist (ATCS) applicants for work at en route center or terminal facilities. The development of the tests involved careful reproduction/simulation of color-coded materials from the most demanding, safety-critical color task performed in each type of facility. The tests were evaluated using 106 subjects with normal color vision and 85 with color vision deficiency. The en route center test, named the Flight Progress Strips Test (FPST), required the identification of critical red/black coding in computer printing and handwriting on flight progress strips. The terminal option test, named the Aviation Lights Test (ALT), simulated red/green/white aircraft lights that must be identified in night ATC tower operations. Color-coding is a non-redundant source of safety-critical information in both tasks. The FPST was validated by direct comparison of responses to strip reproductions with responses to the original flight progress strips and a set of strips selected independently. Validity was high; Kappa = 0.91 with original strips as the validation criterion and 0.86 with different strips. The light point stimuli of the ALT were validated physically with a spectroradiometer. The reliabilities of the FPST and ALT were estimated with Cronbach's alpha as 0.93 and 0.98, respectively. The high job-relevance, validity, and reliability of these tests increase the effectiveness and fairness of ATCS color vision testing.
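    The validity figures quoted above are Cohen's kappa values, the chance-corrected agreement between two sets of categorical readings. A minimal sketch of that computation, on made-up color readings rather than the FPST data:

        from collections import Counter

        def cohens_kappa(ratings_a, ratings_b):
            """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
            agreement and p_e is agreement expected by chance."""
            n = len(ratings_a)
            p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
            freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
            p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
            return (p_o - p_e) / (1 - p_e)

        # Hypothetical responses: color codes read from reproduced vs. original strips
        repro    = ["red", "black", "red", "black", "red", "black"]
        original = ["red", "black", "red", "red",   "red", "black"]
        print(f"kappa = {cohens_kappa(repro, original):.2f}")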

  2. Calculated criticality for 235U/graphite systems using the VIM Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, P.J.; Grasseschi, G.L.; Olsen, D.N.

    1992-01-01

    Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched 235U/graphite systems. Experimental data, used in development of more approximate methods, dates back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios encompass the range from 2,000, representative of the new production MHTGR, to 10,000 in the fuel of TREAT. Calculations used the ENDF/B-V data.

  3. Calculations of key magnetospheric parameters using the isotropic and anisotropic SPSU global MHD code

    NASA Astrophysics Data System (ADS)

    Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor

    2017-04-01

    As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe and the plasma sheet thermal pressure, the cross polar cap electric potential drop and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can calculate both the isotropic and anisotropic MHD equations. In the anisotropic version, we use the modified double-adiabatic equations in which the T⊥/T∥ (the ratio of perpendicular to parallel thermal pressures) has been bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The results of validation for the SPSU-16 code agree well with the previously published results of other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, but some are different.

  4. Determination of the NPP Krško spent fuel decay heat

    NASA Astrophysics Data System (ADS)

    Kromar, Marjan; Kurinčič, Bojan

    2017-07-01

    Nuclear fuel is designed to support the fission process in a reactor core. Some of the isotopes formed during fission decay and produce decay heat and radiation. Accurate knowledge of the nuclide inventory producing decay heat is important after reactor shutdown, during fuel storage, and during subsequent reprocessing or disposal. In this paper, the possibility of calculating the fuel isotopic composition and determining the fuel decay heat with the Serpent code is investigated. Serpent is a well-known Monte Carlo code used primarily for the calculation of neutron transport in a reactor. It has been validated for burn-up calculations. In the calculation of the fuel decay heat, a different set of isotopes is important than in the neutron transport case. A comparison with the Origen code is performed to verify that Serpent takes into account all isotopes important for assessing the fuel decay heat. After the code validation, a sensitivity study is carried out. The influence of several factors such as enrichment, fuel temperature, moderator temperature (density), soluble boron concentration, average power, burnable absorbers, and burnup is analyzed.
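    For orientation, the quantity being validated reduces to a summation over the nuclide inventory, P(t) = sum_i lambda_i N_i(t) Q_i. The sketch below evaluates it for a small hypothetical inventory; the nuclide data are illustrative stand-ins, not Serpent or Origen output.

        import math

        # Hypothetical three-nuclide inventory: (atoms at shutdown, half-life [s],
        # recoverable energy per decay [MeV]); values are illustrative only.
        inventory = [
            (1.0e24, 8.02 * 24 * 3600, 0.57),           # I-131-like
            (5.0e23, 30.1 * 365.25 * 24 * 3600, 1.17),  # Cs-137-like
            (2.0e23, 64.0 * 3600, 0.35),                # generic short-lived product
        ]
        MEV_TO_J = 1.602e-13

        def decay_heat(t_seconds):
            """Decay power P(t) = sum_i lambda_i * N_i0 * exp(-lambda_i t) * Q_i, in watts."""
            power = 0.0
            for n0, half_life, q_mev in inventory:
                lam = math.log(2) / half_life           # decay constant
                power += lam * n0 * math.exp(-lam * t_seconds) * q_mev * MEV_TO_J
            return power

        for days in (1, 30, 365):
            print(f"t = {days:4d} d : P = {decay_heat(days * 86400):.3e} W")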

  5. A Supersonic Argon/Air Coaxial Jet Experiment for Computational Fluid Dynamics Code Validation

    NASA Technical Reports Server (NTRS)

    Clifton, Chandler W.; Cutler, Andrew D.

    2007-01-01

    A non-reacting experiment is described in which data has been acquired for the validation of CFD codes used to design high-speed air-breathing engines. A coaxial jet-nozzle has been designed to produce pressure-matched exit flows of Mach 1.8 at 1 atm in both a center jet of argon and a coflow jet of air, creating a supersonic, incompressible mixing layer. The flowfield was surveyed using total temperature, gas composition, and Pitot probes. The data set was compared to CFD code predictions made using Vulcan, a structured grid Navier-Stokes code, as well as to data from a previous experiment in which a He-O2 mixture was used instead of argon in the center jet of the same coaxial jet assembly. Comparison of experimental data from the argon flowfield and its computational prediction shows that the CFD produces an accurate solution for most of the measured flowfield. However, the CFD prediction deviates from the experimental data in the region downstream of x/D = 4, underpredicting the mixing-layer growth rate.

  6. Report on Survey Implementation. Research Triangle Institute, Caliber Associates and Human Resources Research Organization

    DTIC Science & Technology

    1994-03-01

    DISTRIBUTION/AVAILABILITY STATEMENT 12b. DISTRIBUTION CODE Approved for public release; distribution is unlimited. 13. ABSTRACT (Maximum 200 words) This... different factors influence degree of readiness. The Army currently does not have an operational set of reliable, comprehensive, and valid measures of... [passage illegible in source] ...(b) the variable would be a valid indicator of

  7. Assessment of protein set coherence using functional annotations

    PubMed Central

    Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto

    2008-01-01

    Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846

  8. Single-Shot Scalar-Triplet Measurements in High-Pressure Swirl-Stabilized Flames for Combustion Code Validation

    NASA Technical Reports Server (NTRS)

    Kojima, Jun; Nguyen, Quang-Viet

    2007-01-01

    In support of NASA ARMD's code validation project, we have made significant progress by providing the first quantitative single-shot multi-scalar data from a turbulent elevated-pressure (5 atm), swirl-stabilized, lean direct injection (LDI) type research burner operating on CH4-air using a spatially-resolved pulsed-laser spontaneous Raman diagnostic technique. The Raman diagnostics apparatus and data analysis that we present here were developed over the past 6 years at Glenn Research Center. From the Raman scattering data, we produce spatially-mapped probability density functions (PDFs) of the instantaneous temperature, determined using a newly developed low-resolution effective rotational bandwidth (ERB) technique. The measured 3-scalar (triplet) correlations, between temperature, CH4, and O2 concentrations, as well as their PDFs, also provide a high level of detail into the nature and extent of the turbulent mixing process and its impact on chemical reactions in a realistic gas turbine injector flame at elevated pressures. The multi-scalar triplet data set presented here provides a good validation case for CFD combustion codes to simulate by providing both average and statistical values for the 3 measured scalars.

  9. The International Classification of Functioning (ICF) core set for breast cancer from the perspective of women with the condition.

    PubMed

    Cooney, Marese; Galvin, Rose; Connolly, Elizabeth; Stokes, Emma

    2013-05-01

    The ICF Core Set for breast cancer was generated by international experts for women who have had surgery and radiation, but it has not yet been validated. The objective of the study was to validate the ICF Core Set from the perspective of women with breast cancer. A qualitative focus group methodology was used. The sessions were transcribed verbatim. Meaning units were identified by two independent researchers. The agreed list was subsequently linked to ICF categories by two independent researchers according to pre-defined linking rules. Data saturation determined the number of focus groups conducted. Quality of the data analyses was assured by multiple coding and peer review. Thirty-four women participated in seven focus groups. A total of 1621 meaning units were identified which were linked to 74 of the existing 80 Core Set categories. Additional ICF categories not currently included in the Core Set were identified by the women. The validity of the Core Set was largely supported. However, some categories currently not covered by the ICF Core Set for Breast Cancer will need to be considered for inclusion if the Core Set is to reflect all women who have had treatment for breast cancer.

  10. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.

    2010-11-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations and here a set of validation tests together with a discussion of the performances are presented.

  11. The pros and cons of code validation

    NASA Technical Reports Server (NTRS)

    Bobbitt, Percy J.

    1988-01-01

    Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or a code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage that a free-transition test provides a realistic transition process, rather than one imposed by a turbulence model.

  12. Remarks on CFD validation: A Boeing Commercial Airplane Company perspective

    NASA Technical Reports Server (NTRS)

    Rubbert, Paul E.

    1987-01-01

    Requirements and meaning of validation of computational fluid dynamics codes are discussed. Topics covered include: validating a code, validating a user, and calibrating a code. All results are presented in viewgraph format.

  13. Validation of extended magnetohydrodynamic simulations of the HIT-SI3 experiment using the NIMROD code

    NASA Astrophysics Data System (ADS)

    Morgan, K. D.; Jarboe, T. R.; Hossack, A. C.; Chandra, R. N.; Everson, C. J.

    2017-12-01

    The HIT-SI3 experiment uses a set of inductively driven helicity injectors to apply a non-axisymmetric current drive on the edge of the plasma, driving an axisymmetric spheromak equilibrium in a central confinement volume. These helicity injectors drive a non-axisymmetric perturbation that oscillates in time, with relative temporal phasing of the injectors modifying the mode structure of the applied perturbation. A set of three experimental discharges with different perturbation spectra are modelled using the NIMROD extended magnetohydrodynamics code, and comparisons are made to both magnetic and fluid measurements. These models successfully capture the bulk dynamics of both the perturbation and the equilibrium, though disagreements remain with the experimentally measured pressure gradients.

  14. Analytical Plan for Roman Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.

    Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.

  15. Validation and optimisation of an ICD-10-coded case definition for sepsis using administrative health data

    PubMed Central

    Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J

    2015-01-01

    Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
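    The four measures reported above come straight from a 2x2 table of the coded definition against the chart-review reference standard. A minimal sketch, with hypothetical counts chosen to reproduce the sepsis figures quoted (not the study's raw data):

        def diagnostic_metrics(tp, fp, fn, tn):
            """Validation metrics for a coded case definition against a
            reference standard, from the four cells of a 2x2 table."""
            return {
                "sensitivity": tp / (tp + fn),  # true cases the definition catches
                "specificity": tn / (tn + fp),  # non-cases it correctly excludes
                "ppv":         tp / (tp + fp),  # flagged records that are true cases
                "npv":         tn / (tn + fn),  # unflagged records truly negative
            }

        # Counts consistent with 604/1001 sepsis cases and the reported Sn/Sp/PPV/NPV
        print(diagnostic_metrics(tp=280, fp=5, fn=324, tn=392))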

  16. Development and Validation of Various Phenotyping Algorithms for Diabetes Mellitus Using Data from Electronic Health Records.

    PubMed

    Esteban, Santiago; Rodríguez Tablado, Manuel; Peper, Francisco; Mahumud, Yamila S; Ricci, Ricardo I; Kopitowski, Karin; Terrasa, Sergio

    2017-01-01

    Precision medicine requires extremely large samples. Electronic health records (EHR) are thought to be a cost-effective source of data for that purpose. Phenotyping algorithms help reduce classification errors, making EHR a more reliable source of information for research. Four algorithm development strategies for classifying patients according to their diabetes status (diabetics; non-diabetics; inconclusive) were tested (one codes-only algorithm, one boolean algorithm, four statistical learning algorithms, and six stacked generalization meta-learners). The best performing algorithms within each strategy were tested on the validation set. The stacked generalization algorithm yielded the highest Kappa coefficient value in the validation set (0.95 95% CI 0.91, 0.98). The implementation of these algorithms allows for the exploitation of data from thousands of patients accurately, greatly reducing the costs of constructing retrospective cohorts for research.

  17. Validation of the MCNP6 electron-photon transport algorithm: multiple-scattering of 13- and 20-MeV electrons in thin foils

    NASA Astrophysics Data System (ADS)

    Dixon, David A.; Hughes, H. Grady

    2017-09-01

    This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured down range of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the distribution falls to 1/e of its maximum) is then determined from the angular distribution and compared with experiment. Multiple scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement and generally broader than the measured distributions. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a deficiency in the computation of the underlying angular distributions that is well understood. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied.
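    A minimal sketch of extracting the characteristic angle from a tallied or measured profile, assuming a made-up Gaussian-like distribution rather than the experiment's data:

        import numpy as np

        def characteristic_angle(theta_deg, fluence):
            """Angle at which the angular distribution falls to 1/e of its peak,
            found by linear interpolation on the sampled profile."""
            target = fluence.max() / np.e
            i_peak = int(fluence.argmax())
            # walk outward from the peak until the profile drops below the target
            for i in range(i_peak, len(fluence) - 1):
                if fluence[i + 1] < target <= fluence[i]:
                    frac = (fluence[i] - target) / (fluence[i] - fluence[i + 1])
                    return theta_deg[i] + frac * (theta_deg[i + 1] - theta_deg[i])
            raise ValueError("distribution never falls below peak/e")

        # Hypothetical multiple-scattering profile with a 3-degree 1/e width
        theta = np.linspace(0.0, 15.0, 151)
        profile = np.exp(-(theta / 3.0) ** 2)
        print(f"theta_1/e = {characteristic_angle(theta, profile):.2f} deg")  # ~3.0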

  18. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
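    The optimization can be pictured as a shortest-path problem over candidate reference positions. The sketch below is a simplified 1-D dynamic-programming version under assumed costs; the segment cost function and the per-reference rate penalty are placeholders, not the paper's similarity metric.

        def select_reference_views(n_views, seg_cost, lambda_rate):
            """Shortest-path selection of reference views on a 1-D camera line.
            seg_cost(i, j) = reconstruction distortion of the views between
            references i and j; each reference adds a fixed rate penalty."""
            INF = float("inf")
            best = [INF] * n_views      # best[j]: min cost with last reference at j
            prev = [-1] * n_views
            best[0] = lambda_rate       # first view forced to be a reference here
            for j in range(1, n_views):
                for i in range(j):
                    cost = best[i] + seg_cost(i, j) + lambda_rate
                    if cost < best[j]:
                        best[j], prev[j] = cost, i
            refs, j = [], n_views - 1   # backtrack from the last view
            while j >= 0:
                refs.append(j)
                j = prev[j]
            return sorted(refs), best[-1]

        # Hypothetical distortion: grows with the span between the two references
        demo_cost = lambda i, j: 0.5 * (j - i - 1) ** 2
        print(select_reference_views(10, demo_cost, lambda_rate=2.0))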

  19. Code Usage Analysis System (CUAS)

    NASA Technical Reports Server (NTRS)

    Horsley, P. H.; Oliver, J. D.

    1976-01-01

    A set of computer programs is offered to aid a user in evaluating performance of an application program. The system provides reports of subroutine usage, program errors, and segment loading which occurred during the execution of an application program. It is presented in support of the development and validation of the space vehicle dynamics project.

  20. Does Psychopathy Predict Institutional Misconduct among Adults?: A Meta-Analytic Investigation

    ERIC Educational Resources Information Center

    Guy, Laura S.; Edens, John F.; Anthony, Christine; Douglas, Kevin S.

    2005-01-01

    Narrative reviews have raised several questions regarding the predictive validity of the Hare Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 2003) and related scales in institutional settings. In this meta-analysis, the authors coded 273 effect sizes to investigate the association between the Hare scales and a hierarchy of increasingly specific…

  1. Reaction path of energetic materials using THOR code

    NASA Astrophysics Data System (ADS)

    Durães, L.; Campos, J.; Portugal, A.

    1998-07-01

    The method of predicting reaction path, using the THOR code, allows the calculation of the composition and thermodynamic properties of the reaction products of energetic materials for isobaric and isochoric adiabatic combustion and CJ detonation regimes. THOR code assumes the thermodynamic equilibria of all possible products, for the minimum Gibbs free energy, using the HL EoS. The code allows the possibility of estimating various sets of reaction products, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated and discussed: pure ammonium nitrate and the ammonium nitrate-based explosive ANFO, and nitromethane, because their equivalence ratios are, respectively, lower than, near, and greater than stoichiometric. Predictions of reaction path are in good correlation with experimental values, proving the validity of the proposed method.
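    The core computation described above is a constrained Gibbs free-energy minimization over candidate product species. A heavily simplified sketch, with an illustrative six-species product set, made-up dimensionless chemical potentials, and an ideal-mixing term standing in for the HL EoS:

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative product set and data (NOT THOR's species list or EoS):
        species = ["CO2", "CO", "H2O", "H2", "O2", "N2"]
        mu0 = np.array([-50.0, -30.0, -40.0, 0.0, 0.0, 0.0])  # mu_i0 / RT, illustrative

        # Element balance: rows C, H, O, N; columns follow `species`.
        A = np.array([[1, 1, 0, 0, 0, 0],    # carbon
                      [0, 0, 2, 2, 0, 0],    # hydrogen
                      [2, 1, 1, 0, 2, 0],    # oxygen
                      [0, 0, 0, 0, 0, 2]],   # nitrogen
                     dtype=float)
        b = np.array([1.0, 4.0, 3.0, 2.0])   # element moles, roughly CH4 + 1.5 O2 + N2

        def gibbs(n):
            """Dimensionless mixture Gibbs energy G/RT = sum_i n_i (mu_i0 + ln(n_i/n_tot))."""
            n = np.clip(n, 1e-12, None)      # keep the logarithm finite
            return float(n @ (mu0 + np.log(n / n.sum())))

        res = minimize(gibbs, x0=np.full(len(species), 0.5), method="SLSQP",
                       bounds=[(1e-10, None)] * len(species),
                       constraints={"type": "eq", "fun": lambda n: A @ n - b})
        for name, moles in zip(species, res.x):
            print(f"{name:4s} {moles:8.4f} mol")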

  2. Preliminary Analysis of the Transient Reactor Test Facility (TREAT) with PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connaway, H. M.; Lee, C. H.

    The neutron transport code PROTEUS has been used to perform preliminary simulations of the Transient Reactor Test Facility (TREAT). TREAT is an experimental reactor designed for the testing of nuclear fuels and other materials under transient conditions. It operated from 1959 to 1994, when it was placed on non-operational standby. The restart of TREAT to support the U.S. Department of Energy’s resumption of transient testing is currently underway. Both single assembly and assembly-homogenized full core models have been evaluated. Simulations were performed using a historic set of WIMS-ANL-generated cross-sections as well as a new set of Serpent-generated cross-sections. To support this work, further analyses were also performed using additional codes in order to investigate particular aspects of TREAT modeling. DIF3D and the Monte-Carlo codes MCNP and Serpent were utilized in these studies. MCNP and Serpent were used to evaluate the effect of geometry homogenization on the simulation results and to support code-to-code comparisons. New meshes for the PROTEUS simulations were created using the CUBIT toolkit, with additional meshes generated via conversion of selected DIF3D models to support code-to-code verifications. All current analyses have focused on code-to-code verifications, with additional verification and validation studies planned. The analysis of TREAT with PROTEUS-SN is an ongoing project. This report documents the studies that have been performed thus far, and highlights key challenges to address in future work.

  3. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
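    Recall and precision here are set overlaps between system-generated and reference codes. A minimal sketch with placeholder UMLS CUIs (the identifiers are illustrative, not codes from the study):

        def recall_precision(system_codes, reference_codes):
            """Set-based recall and precision of automatically generated codes
            against a manually determined reference standard."""
            true_pos = len(system_codes & reference_codes)
            recall = true_pos / len(reference_codes)
            precision = true_pos / len(system_codes)
            return recall, precision

        # Hypothetical codes for one sentence (placeholder CUIs)
        reference = {"C0011847", "C0020538", "C0027051", "C0032285"}
        system    = {"C0011847", "C0020538", "C0032285", "C0004096"}
        r, p = recall_precision(system, reference)
        print(f"recall = {r:.2f}, precision = {p:.2f}")  # 0.75, 0.75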

  4. Multiple pathogen biomarker detection using an encoded bead array in droplet PCR.

    PubMed

    Periyannan Rajeswari, Prem Kumar; Soderberg, Lovisa M; Yacoub, Alia; Leijon, Mikael; Andersson Svahn, Helene; Joensson, Haakan N

    2017-08-01

    We present a droplet PCR workflow for detection of multiple pathogen DNA biomarkers using fluorescent color-coded Luminex® beads. This strategy enables encoding of multiple singleplex droplet PCRs using a commercially available bead set of several hundred distinguishable fluorescence codes. This workflow provides scalability beyond the limited number offered by fluorescent detection probes such as TaqMan probes, commonly used in current multiplex droplet PCRs. The workflow was validated for three different Luminex bead sets coupled to target specific capture oligos to detect hybridization of three microorganisms infecting poultry: avian influenza, infectious laryngotracheitis virus and Campylobacter jejuni. In this assay, the target DNA was amplified with fluorescently labeled primers by PCR in parallel in monodisperse picoliter droplets, to avoid amplification bias. The color codes of the Luminex detection beads allowed concurrent and accurate classification of the different bead sets used in this assay. The hybridization assay detected target DNA of all three microorganisms with high specificity, from samples with average target concentration of a single DNA template molecule per droplet. This workflow demonstrates the possibility of increasing the droplet PCR assay detection panel to detect large numbers of targets in parallel, utilizing the scalability offered by the color-coded Luminex detection beads. Copyright © 2017. Published by Elsevier B.V.

  5. Evaluation of Finite-Rate Gas/Surface Interaction Models for a Carbon Based Ablator

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Goekcen, Tahir

    2015-01-01

    Two sets of finite-rate gas-surface interaction models between air and the carbon surface are studied. The first set is an engineering model with one-way chemical reactions, and the second set is a more detailed model with two-way chemical reactions. These two proposed models intend to cover the carbon surface ablation conditions including the low temperature rate-controlled oxidation, the mid-temperature diffusion-controlled oxidation, and the high temperature sublimation. The prediction of carbon surface recession is achieved by coupling a material thermal response code and a Navier-Stokes flow code. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and Ablation Program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting full Navier-Stokes equations using the Data Parallel Line Relaxation method. Recession analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities with heat fluxes ranging from 45 to 1100 W/cm2 are performed and compared with data for model validation. The ablating material used in these arc-jet tests is Phenolic Impregnated Carbon Ablator. Additionally, computational predictions of surface recession and shape change are in good agreement with measurements for arc-jet conditions of the Small Probe Reentry Investigation for Thermal Protection System Engineering.

  6. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  7. The development and validation of an activity monitoring system for use in measurement of posture of childbearing women during first stage of labor.

    PubMed

    Martin, Caroline J Hollins; Kenney, Laurence; Pratt, Thomas; Granat, Malcolm H

    2015-01-01

    There is limited understanding of the type and extent of maternal postures that midwives should encourage or support during labor. The aims of this study were to identify a set of postures and movements commonly seen during labor, to develop an activity monitoring system for use during labor, and to validate this system design. Volunteer student midwives simulated maternal activity during labor in a laboratory setting. Participants (N = 15) wore monitors adhered to the left thigh and left shank, and adopted 13 common postures of laboring women for 3 minutes each. Simulated activities were recorded using a video camera. Postures and movements were coded from the video, and statistical analysis conducted of agreement between coded video data and outputs of the activity monitoring system. Excellent agreement between the 2 raters of the video recordings was found (Cohen's κ = 0.95). Both sensitivity and specificity of the activity monitoring system were greater than 80% for standing, lying, kneeling, and sitting (legs dangling). This validated system can be used to measure elected activity of laboring women and report on effects of postures on length of first stage, pain experience, birth satisfaction, and neonatal condition. This validated maternal posture-monitoring system is available as a reference, and for use by researchers who wish to develop research in this area. © 2015 by the American College of Nurse-Midwives.

  8. DIC Challenge: Developing Images and Guidelines for Evaluating Accuracy and Resolution of 2D Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.

    With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.

  9. DIC Challenge: Developing Images and Guidelines for Evaluating Accuracy and Resolution of 2D Analyses

    DOE PAGES

    Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.; ...

    2017-12-11

    With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.

  10. Efficient Modeling of Laser-Plasma Accelerators with INF and RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-11-04

    The numerical modeling code INF and RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF and RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations and here a set of validation tests together with a discussion of the performances are presented.

  11. Los Alamos and Lawrence Livermore National Laboratories Code-to-Code Comparison of Inter Lab Test Problem 1 for Asteroid Impact Hazard Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Robert P.; Miller, Paul; Howley, Kirsten

    The NNSA Laboratories have entered into an interagency collaboration with the National Aeronautics and Space Administration (NASA) to explore strategies for prevention of Earth impacts by asteroids. Assessment of such strategies relies upon use of sophisticated multi-physics simulation codes. This document describes the task of verifying and cross-validating, between Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL), modeling capabilities and methods to be employed as part of the NNSA-NASA collaboration. The approach has been to develop a set of test problems and then to compare and contrast results obtained by use of a suite of codes, including MCNP, RAGE, Mercury, Ares, and Spheral. This document provides a short description of the codes, an overview of the idealized test problems, and discussion of the results for deflection by kinetic impactors and stand-off nuclear explosions.

  12. 5D Tempest simulations of kinetic edge turbulence

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M. R.; Hittinger, J. A.; Kerbel, G. D.; Nevins, W. M.; Rognlien, T. D.; Umansky, M. V.; Qin, H.

    2006-10-01

    Results are presented from the development and application of TEMPEST, a nonlinear five-dimensional (3d2v) gyrokinetic continuum code. The simulation results and theoretical analysis include studies of H-mode edge plasma neoclassical transport and turbulence in real divertor geometry and its relationship to plasma flow generation with zero external momentum input, including the important orbit-squeezing effect due to the large electric field flow-shear in the edge. In order to extend the code to 5D, we have formulated a set of fully nonlinear electrostatic gyrokinetic equations and a fully nonlinear gyrokinetic Poisson's equation which is valid for both neoclassical and turbulence simulations. Our 5D gyrokinetic code is built on the 4D version of the Tempest neoclassical code, with extension to a fifth dimension in the binormal direction. The code is able to simulate either a full torus or a toroidal segment. Progress on performing 5D turbulence simulations will be reported.

  13. Development of an expert based ICD-9-CM and ICD-10-CM map to AIS 2005 update 2008.

    PubMed

    Loftis, Kathryn L; Price, Janet P; Gillich, Patrick J; Cookman, Kathy J; Brammer, Amy L; St Germain, Trish; Barnes, Jo; Graymire, Vickie; Nayduch, Donna A; Read-Allsopp, Christine; Baus, Katherine; Stanley, Patsye A; Brennan, Maureen

    2016-09-01

    This article describes how maps were developed from the clinical modifications of the 9th and 10th revisions of the International Classification of Diseases (ICD) to the Abbreviated Injury Scale 2005 Update 2008 (AIS08). The development of the mapping methodology is described, with discussion of the major assumptions used in the process to map ICD codes to AIS severities. There were many intricacies to developing the maps, because the 2 coding systems, ICD and AIS, were developed for different purposes and contain unique classification structures to meet these purposes. Experts in ICD and AIS analyzed the rules and coding guidelines of both injury coding schemes to develop rules for mapping ICD injury codes to the AIS08. This involved subject-matter expertise, detailed knowledge of anatomy, and an in-depth understanding of injury terms and definitions as applied in both taxonomies. The official ICD-9-CM and ICD-10-CM versions (injury sections) were mapped to the AIS08 codes and severities, following the rules outlined in each coding manual. The panel of experts was composed of coders certified in ICD and/or AIS from around the world. In the process of developing the map from ICD to AIS, the experts created rules to address issues with the differences in coding guidelines between the 2 schemas and assure a consistent approach to all codes. Over 19,000 ICD codes were analyzed and maps were generated for each code to AIS08 chapters, AIS08 severities, and Injury Severity Score (ISS) body regions. After completion of the maps, 14,101 (74%) of the eligible 19,012 injury-related ICD-9-CM and ICD-10-CM codes were assigned valid AIS08 severity scores between 1 and 6. The remaining 4,911 codes were assigned an AIS08 of 9 (unknown) or were determined to be nonmappable because the ICD description lacked sufficient qualifying information for determining severity according to AIS rules. There were also 15,214 (80%) ICD codes mapped to AIS08 chapter and ISS body region, which allow for ISS calculations for patient data sets. This mapping between ICD and AIS provides a comprehensive, expert-designed solution for analysts to bridge the data gap between the injury descriptions provided in hospital codes (ICD-9-CM, ICD-10-CM) and injury severity codes (AIS08). By applying consistent rules from both the ICD and AIS taxonomies, the expert panel created these definitive maps, which are the only ones endorsed by the Association for the Advancement of Automotive Medicine (AAAM). Initial validation upheld the quality of these maps for the estimation of AIS severity, but future work should include verification of these maps for MAIS and ISS estimations with large data sets. These ICD-AIS maps will support data analysis from databases with injury information classified in these 2 different systems and open new doors for the investigation of injury from traumatic events using large injury data sets.
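    Once such a map exists, severity scoring is a lookup followed by the standard ISS rule. The sketch below uses a hypothetical four-code map fragment; the severities and body regions are illustrative placeholders, not entries from the AAAM-endorsed map.

        # Hypothetical fragment of an ICD-to-AIS map: ICD code -> (AIS severity,
        # ISS body region). Entries are illustrative only.
        ICD_TO_AIS = {
            "S06.5X0A": (4, "head"),       # traumatic subdural hemorrhage
            "S27.321A": (3, "chest"),      # lung contusion, unilateral
            "S72.001A": (3, "extremity"),  # femoral neck fracture
            "S81.801A": (1, "external"),   # lower-leg laceration
        }

        def injury_severity_score(icd_codes):
            """ISS: sum of squares of the highest AIS severity in each of the
            three most severely injured ISS body regions (any AIS 6 -> 75)."""
            worst = {}
            for code in icd_codes:
                sev, region = ICD_TO_AIS[code]
                worst[region] = max(worst.get(region, 0), sev)
            if 6 in worst.values():
                return 75
            top3 = sorted(worst.values(), reverse=True)[:3]
            return sum(s * s for s in top3)

        print(injury_severity_score(["S06.5X0A", "S27.321A", "S72.001A", "S81.801A"]))
        # 4^2 + 3^2 + 3^2 = 34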

  14. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

    Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
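    The filtering step itself is simple: drop nodes whose usage count falls below a threshold and keep only edges between the survivors. A minimal sketch with made-up ICD9-CM usage counts (not data from the study):

        def filter_graph(node_counts, edges, min_count):
            """Keep nodes with count >= min_count and the edges between them;
            this is the core of the summary-view filter."""
            keep = {n for n, c in node_counts.items() if c >= min_count}
            return keep, {(a, b) for a, b in edges if a in keep and b in keep}

        # Hypothetical usage counts (code -> number of patient records)
        counts = {"250": 1200, "250.0": 900, "250.6": 40, "401": 1500, "401.9": 1100}
        hierarchy = {("250", "250.0"), ("250", "250.6"), ("401", "401.9")}
        nodes, kept_edges = filter_graph(counts, hierarchy, min_count=100)
        print(sorted(nodes))       # ['250', '250.0', '401', '401.9']
        print(sorted(kept_edges))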

  15. [DNA prints instead of plantar prints in neonatal identification].

    PubMed

    Rodríguez-Alarcón Gómez, J; Martińez de Pancorbo Gómez, M; Santillana Ferrer, L; Castro Espido, A; Melchor Maros, J C; Linares Uribe, M A; Fernández-Llebrez del Rey, L; Aranguren Dúo, G

    1996-06-22

    To check the possible usefulness of studying DNA in dried blood spots taken on filter paper blotters for newborn identification. It set out to establish: 1. The validity of the method for analysis; 2. The validity of all stored samples (such as those kept in clinical records); 3. Guarantee of non-intrusion in the genetic code; 4. Acceptable price and execution time. Forty (40) anonymous samples, stored for 13 years, from 20 subjects (2 per subject) were studied. DNA was extracted using Chelex resin, and the STR ("short tandem repeat") regions of microsatellite DNA were studied using the polymerase chain reaction (PCR) method. Three non-coding DNA loci (CSF1PO, TPOX and THO1) were analyzed by multiplex amplification. It was possible to type 39 samples, making it possible to match the 20 cases (one by exclusion). The complete procedure yielded the results within 24 hours in all cases. The estimated final cost was found to be a fifth of that of conventional maternity/paternity tests. The study carried out made matching possible in all 20 cases (directly in 19 cases). It was not necessary to study DNA coding areas. The validity of the method for analyzing samples stored for 13 years without any special care was also demonstrated. The technique was fast, producing results within 24 hours, and at reasonable cost.

  16. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-06-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations and here a set of validation tests together with a discussion of the performances are presented.

  17. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. This report will consider the following codes: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code will be described separately in the following sections, with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published work and validations of the codes; however, the codes have since been further developed to extend their capabilities.

  18. Validation of a Detailed Scoring Checklist for Use During Advanced Cardiac Life Support Certification

    PubMed Central

    McEvoy, Matthew D.; Smalley, Jeremy C.; Nietert, Paul J.; Field, Larry C.; Furse, Cory M.; Blenko, John W.; Cobb, Benjamin G.; Walters, Jenna L.; Pendarvis, Allen; Dalal, Nishita S.; Schaefer, John J.

    2012-01-01

    Introduction Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, prior to setting these standards of performance, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by non-expert raters during simulations of American Heart Association (AHA) MegaCodes. Methods The reliability of scores generated from a detailed set of checklists, when used by four non-expert raters, was tested by grading team leader performance in eight MegaCode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and by a group of non-expert raters. The videos were reviewed “continuously” and “with pauses.” Two content experts served as the reference standard for grading, and four non-expert raters were used to test the reliability of the checklists. Results Our results demonstrate that non-expert raters are able to produce reliable grades when using the checklists under consideration, demonstrating excellent intra-rater reliability and agreement with a reference standard. The results also demonstrate that non-expert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA MegaCodes when using our checklist in continuous mode, as measures of agreement in total scoring were very strong (Lin’s Concordance Correlation Coefficient = 0.96; Intraclass Correlation Coefficient = 0.97). Discussion We have shown that our checklists can yield reliable scores, are appropriate for use by non-expert raters, and are able to be employed during continuous assessment of team leader performance during the review of a simulated MegaCode. This checklist may be more appropriate for use by Advanced Cardiac Life Support (ACLS) instructors during MegaCode assessments than current tools provided by the AHA. PMID:22863996
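
    As a rough illustration of the agreement statistic quoted above, the following minimal sketch (illustrative data, not the study's scores) computes Lin's concordance correlation coefficient between one rater's checklist totals and a reference standard:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two score sets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()          # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Invented checklist totals: one non-expert rater vs. reference standard.
rater = [38, 41, 35, 44, 40, 37, 42, 39]
reference = [39, 42, 34, 45, 41, 36, 43, 40]
print(round(lins_ccc(rater, reference), 3))
```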

  19. Validity and reliability of the Fitbit Zip as a measure of preschool children’s step count

    PubMed Central

    Sharp, Catherine A; Mackintosh, Kelly A; Erjavec, Mihela; Pascoe, Duncan M; Horne, Pauline J

    2017-01-01

    Objectives Validation of physical activity measurement tools is essential to determine the relationship between physical activity and health in preschool children, but research to date has not focused on this priority. The aims of this study were to ascertain inter-rater reliability of observer step count, and interdevice reliability and validity of Fitbit Zip accelerometer step counts in preschool children. Methods Fifty-six children aged 3–4 years (29 girls) recruited from 10 nurseries in North Wales, UK, wore two Fitbit Zip accelerometers while performing a timed walking task in their childcare settings. Accelerometers were worn in secure pockets inside a custom-made tabard. Video recordings enabled two observers to independently code the number of steps performed in 3 min by each child during the walking task. Intraclass correlations (ICCs), concordance correlation coefficients, Bland-Altman plots and absolute per cent error were calculated to assess the reliability and validity of the consumer-grade device. Results An excellent ICC was found between the two observer codings (ICC=1.00) and the two Fitbit Zips (ICC=0.91). Concordance between the Fitbit Zips and observer counts was also high (r=0.77), with an acceptable absolute per cent error (6%–7%). Bland-Altman analyses identified a bias for Fitbit 1 of 22.8±19.1 steps with limits of agreement between −14.7 and 60.2 steps, and a bias for Fitbit 2 of 25.2±23.2 steps with limits of agreement between −20.2 and 70.5 steps. Conclusions Fitbit Zip accelerometers are a reliable and valid method of recording preschool children’s step count in a childcare setting. PMID:29081984
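
    The Bland-Altman quantities reported above (bias and limits of agreement) follow the usual definitions; a minimal sketch with invented step counts:

```python
import numpy as np

def bland_altman(device, criterion):
    """Return mean bias and 95% limits of agreement (device - criterion)."""
    diff = np.asarray(device, float) - np.asarray(criterion, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

fitbit_steps = [260, 248, 271, 255, 266]      # invented 3-minute counts
observed_steps = [240, 230, 242, 231, 238]
bias, (lo, hi) = bland_altman(fitbit_steps, observed_steps)
print(f"bias {bias:.1f} steps, limits of agreement [{lo:.1f}, {hi:.1f}]")
```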

  20. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE PAGES

    Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...

    2015-12-15

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit, being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.

  2. Developing and Implementing the Data Mining Algorithms in RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  3. RY-Coding and Non-Homogeneous Models Can Ameliorate the Maximum-Likelihood Inferences From Nucleotide Sequence Data with Parallel Compositional Heterogeneity.

    PubMed

    Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo

    2012-01-01

    In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, even though individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purines and pyrimidines to normalize base frequencies across a tree, while the latter approach explicitly incorporates the heterogeneity in base frequency. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by simulation studies. Here, we assessed the performance of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be affected significantly more by the setting of the substitution process used for sequence simulation than that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
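
    RY-coding itself is a one-line transformation; a minimal sketch of the recoding described above (purines to R, pyrimidines to Y):

```python
# Purines (A, G) -> R; pyrimidines (C, T) -> Y.
RY_MAP = str.maketrans({"A": "R", "G": "R", "C": "Y", "T": "Y"})

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence into the two-state RY alphabet."""
    return seq.upper().translate(RY_MAP)

print(ry_code("ATGCGGATC"))  # -> RYRYRRRYY
```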

  4. Student perception of travel service learning experience in Morocco.

    PubMed

    Puri, Aditi; Kaddoura, Mahmoud; Dominick, Christine

    2013-08-01

    This study explores the perceptions of health profession students participating in academic service learning in Morocco with respect to adapting health care practices to cultural diversity. The authors utilized semi-structured, open-ended interviews to explore the perceptions of health profession students. Nine dental hygiene and nursing students who traveled to Morocco to provide oral and general health services were interviewed. The interviews were recorded and transcribed verbatim to ascertain descriptive validity and to generate the inductive and deductive codes that constitute the major themes of the data analysis. Thereafter, NVIVO 8 was used to rapidly determine the frequency of applied codes. The authors compared the codes and themes to establish interpretive validity. Codes and themes were initially determined independently by the co-authors and subsequently applied to the data. The authors compared the applied codes to establish intra-rater reliability. International service learning experiences led to perceptions of growth as health care providers among students. The application of knowledge and skills learned in academic programs and service learning settings was found to help bridge the theory-practice gap. The specific experience enabled students to gain an understanding of diverse health care and cultural practices in Morocco. Students perceived that the experience gained in international service learning can heighten awareness of diverse cultural and health care practices to foster the professional growth of health professionals.

  5. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors

    PubMed Central

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
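
    As a loose illustration of the photon-history loop described above (emphatically not ScintSim1 itself), the toy sketch below random-walks optical photons inside a single rectangular element until they are absorbed, escape, or reach a receptor face; all geometry and probabilities are invented:

```python
import math, random

def follow_photon(width=3.0, height=10.0, absorb_prob=0.02, step=0.5):
    """Return True if the photon reaches the receptor face (y <= 0)."""
    x, y = width / 2, height / 2          # emission point inside the element
    while 0.0 <= x <= width and y <= height:
        if y <= 0.0:
            return True                   # collected at the receptor
        if random.random() < absorb_prob:
            return False                  # attenuated in the bulk
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)       # isotropic scattering step
        y += step * math.sin(theta)
    return False                          # escaped through a side or top face

random.seed(1)
n = 10_000
print(f"detected fraction: {sum(follow_photon() for _ in range(n)) / n:.3f}")
```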

  6. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.

    PubMed

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization.

  7. Methodology, status and plans for development and assessment of the code ATHLET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Austregesilo, H.; Lerchl, G.

    1997-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, a full-range drift-flux model, and dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems, including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal, emphasizing, however, the German combined ECC injection system, which was investigated in the UPTF, PKL and LOBI test facilities.

  8. A method for radiological characterization based on fluence conversion coefficients

    NASA Astrophysics Data System (ADS)

    Froeschl, Robert

    2018-06-01

    Radiological characterization of components in accelerator environments is often required to ensure adequate radiation protection during maintenance, transport and handling, as well as for the selection of the proper disposal pathway. The relevant quantities are typically weighted sums of specific activities with radionuclide-specific weighting coefficients. Traditional Monte Carlo methods either tally radionuclide creation events directly, or score the particle fluences in the regions of interest and weight them off-line with radionuclide production cross sections. The presented method bases the radiological characterization on a set of fluence conversion coefficients. For a given irradiation profile and cool-down time, radionuclide production cross sections, material composition and radionuclide-specific weighting coefficients, a set of particle-type- and energy-dependent fluence conversion coefficients is computed. These fluence conversion coefficients can then be used in a Monte Carlo transport code to perform on-line weighting and directly obtain the desired radiological characterization, either by using built-in multiplier features, such as in the PHITS code, or by writing a dedicated user routine, such as for the FLUKA code. The presented method has been validated against the standard event-based methods directly available in Monte Carlo transport codes.
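
    A hedged sketch of the off-line computation described above, with invented numbers: fold production cross sections, weighting coefficients, and a build-up/decay factor into energy-dependent fluence conversion coefficients, after which the on-line weighting reduces to a dot product with the scored fluence:

```python
import numpy as np

# sigma[nuclide, energy_bin]: production cross sections (arbitrary units).
sigma = np.array([[0.10, 0.30, 0.50],
                  [0.05, 0.20, 0.40]])
weights = np.array([2.0, 1.0])     # radionuclide-specific weighting
decay = np.array([0.8, 0.3])       # factor for irradiation profile/cool-down

# Energy-dependent fluence conversion coefficients, computed once off-line.
fcc = (weights * decay) @ sigma    # shape: (energy_bins,)

# On-line weighting in the transport code is then a dot product with the
# scored fluence spectrum in the region of interest.
fluence = np.array([1.0e4, 5.0e3, 2.0e3])
print("characterization quantity:", fcc @ fluence)
```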

  9. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    PubMed

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. The coding validity of a condition is closely related to its clinical importance and to the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
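
    The validity measures used above follow the standard definitions; a quick sketch with invented chart-review counts:

```python
def sensitivity(tp, fn):
    """True positives over all condition-positive charts."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """True positives over all coded-positive records."""
    return tp / (tp + fp)

# Invented counts: a condition coded in abstracts vs. chart review.
tp, fp, fn = 820, 60, 240
print(f"sensitivity {sensitivity(tp, fn):.2f}, PPV {ppv(tp, fp):.2f}")
```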

  10. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  11. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896

  12. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring the application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
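
    Biorthogonal decomposition of a space-time data matrix is, in practice, a singular value decomposition; a minimal sketch on synthetic data (not PSI-Center code):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 32)
# Two coherent structures plus noise, sampled as (space, time).
data = (np.outer(np.sin(np.pi * x), np.cos(20 * t))
        + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(35 * t))
        + 0.01 * rng.standard_normal((32, 200)))

# SVD: spatial modes ("topos"), weights, temporal modes ("chronos").
topos, weights, chronos = np.linalg.svd(data, full_matrices=False)
energy = weights**2 / np.sum(weights**2)
print("energy fraction of first two modes:", energy[:2].round(3))
```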

  13. StarHorse: a Bayesian tool for determining stellar masses, ages, distances, and extinctions for field stars

    NASA Astrophysics Data System (ADS)

    Queiroz, A. B. A.; Anders, F.; Santiago, B. X.; Chiappini, C.; Steinmetz, M.; Dal Ponte, M.; Stassun, K. G.; da Costa, L. N.; Maia, M. A. G.; Crestani, J.; Beers, T. C.; Fernández-Trincado, J. G.; García-Hernández, D. A.; Roman-Lopes, A.; Zamora, O.

    2018-05-01

    Understanding the formation and evolution of our Galaxy requires accurate distances, ages, and chemistry for large populations of field stars. Here, we present several updates to our spectrophotometric distance code, which can now also be used to estimate ages, masses, and extinctions for individual stars. Given a set of measured spectrophotometric parameters, we calculate the posterior probability distribution over a given grid of stellar evolutionary models, using flexible Galactic stellar-population priors. The code (called StarHorse) can accommodate different observational data sets, prior options, partially missing data, and the inclusion of parallax information into the estimated probabilities. We validate the code using a variety of simulated stars as well as real stars with parameters determined from asteroseismology, eclipsing binaries, and isochrone fits to star clusters. Our main goal in this validation process is to test the applicability of the code to field stars with known Gaia-like parallaxes. The typical internal precisions (obtained from realistic simulations of an APOGEE+Gaia-like sample) are ≃ 8 per cent in distance, ≃ 20 per cent in age, ≃ 6 per cent in mass, and ≃ 0.04 mag in A_V. The median external precision (derived from comparisons with earlier work for real stars) varies with the sample used, but lies in the range of ≃ 0-2 per cent for distances, ≃ 12-31 per cent for ages, ≃ 4-12 per cent for masses, and ≃ 0.07 mag for A_V. We provide StarHorse distances and extinctions for the APOGEE DR14, RAVE DR5, GES DR3, and GALAH DR1 catalogues.
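
    The grid-based Bayesian step described above can be illustrated compactly; the sketch below builds a posterior over a toy model grid from a Gaussian likelihood and a population prior (all values invented, and one parameter stands in for the full spectrophotometric set):

```python
import numpy as np

# Hypothetical model grid over effective temperature only.
grid_teff = np.array([5200.0, 5400.0, 5600.0, 5800.0, 6000.0])
prior = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # stellar-population prior

teff_obs, teff_err = 5650.0, 100.0            # one measured parameter
loglike = -0.5 * ((grid_teff - teff_obs) / teff_err) ** 2
post = np.exp(loglike) * prior
post /= post.sum()                            # normalized posterior on grid

print("posterior over grid:", post.round(3))
print("posterior-mean Teff:", round((post * grid_teff).sum(), 1))
```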

  14. MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.

    2001-01-01

    The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations: clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9 and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including LCO behavior.

  15. MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.

    2001-01-01

    The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations: clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9 and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including Limit Cycle Oscillation behavior.

  16. A combinatorial code for pattern formation in Drosophila oogenesis.

    PubMed

    Yakoby, Nir; Bristow, Christopher A; Gong, Danielle; Schafer, Xenia; Lembong, Jessica; Zartman, Jeremiah J; Halfon, Marc S; Schüpbach, Trudi; Shvartsman, Stanislav Y

    2008-11-01

    Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggests that they follow a simple combinatorial code based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of inductive signals, provided by the highly conserved epidermal growth factor receptor and bone morphogenetic protein signaling pathways. We demonstrate the validity of the code by testing it against a set of patterns obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguish 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterize their joint dynamics over four stages of oogenesis. The proposed combinatorial framework allows systematic analysis of the diversity and dynamics of two-dimensional transcriptional patterns and guides future studies of gene regulation.
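
    As a loose illustration of the proposed algebra, the sketch below represents building blocks as sets of cells and composes a pattern with union, difference, and intersection; block names and extents are invented:

```python
# Toy epithelium of 100 cells; building blocks as sets of cell indices.
blocks = {
    "midline":  set(range(40, 60)),
    "dorsal":   set(range(0, 50)),
    "anterior": set(range(0, 25)),
}

# A pattern written in the proposed algebra: union, difference, intersection.
pattern = (blocks["dorsal"] - blocks["midline"]) | (
    blocks["anterior"] & blocks["midline"])
print(len(pattern), "cells expressed")   # 40 (anterior & midline is empty)
```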

  17. Comparisons of survival predictions using survival risk ratios based on International Classification of Diseases, Ninth Revision and Abbreviated Injury Scale trauma diagnosis codes.

    PubMed

    Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd

    2005-09-01

    We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center's registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis, and from the survival rate when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival by the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods showed better Hosmer-Lemeshow goodness-of-fit than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs, as can occur with multiplication. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
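
    The two prediction rules compared above are simple to state; a minimal sketch with invented SRR values:

```python
from math import prod

srr = {"862.21": 0.92, "807.03": 0.85, "864.04": 0.70}   # invented SRRs

def p_survival_iciss(codes):
    """Multiplicative ICISS: product of the SRRs of all diagnoses."""
    return prod(srr[c] for c in codes)

def p_survival_min(codes):
    """Single-worst-injury rule: the minimum SRR alone."""
    return min(srr[c] for c in codes)

patient = ["862.21", "807.03", "864.04"]
print(f"ICISS:   {p_survival_iciss(patient):.3f}")   # 0.547
print(f"min SRR: {p_survival_min(patient):.3f}")     # 0.700
```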

  18. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.

  19. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  20. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    PubMed Central

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: penelope was first translated from fortran to c++ and the result was confirmed to produce equivalent results to the original code. The c++ code was then adapted to cuda in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gpenelope highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gpenelope as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gpenelope. Ultimately, gpenelope was applied toward independent validation of patient doses calculated by MRIdian’s kmc. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread fortran implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen(1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gpenelope with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU- accelerated version of penelope. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123

  1. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    PubMed

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. penelope was first translated from fortran to c++ and the result was confirmed to produce equivalent results to the original code. The c++ code was then adapted to cuda in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gpenelope highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gpenelope as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gpenelope. Ultimately, gpenelope was applied toward independent validation of patient doses calculated by MRIdian's kmc. An acceleration factor of 152 was achieved in comparison to the original single-thread fortran implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen(1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gpenelope with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU- accelerated version of penelope. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  2. A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bravenec, Ronald

    My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.

  3. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  4. Platelet function is modified by common sequence variation in megakaryocyte super enhancers

    PubMed Central

    Petersen, Romina; Lambourne, John J.; Javierre, Biola M.; Grassi, Luigi; Kreuzhuber, Roman; Ruklisa, Dace; Rosa, Isabel M.; Tomé, Ana R.; Elding, Heather; van Geffen, Johanna P.; Jiang, Tao; Farrow, Samantha; Cairns, Jonathan; Al-Subaie, Abeer M.; Ashford, Sofie; Attwood, Antony; Batista, Joana; Bouman, Heleen; Burden, Frances; Choudry, Fizzah A.; Clarke, Laura; Flicek, Paul; Garner, Stephen F.; Haimel, Matthias; Kempster, Carly; Ladopoulos, Vasileios; Lenaerts, An-Sofie; Materek, Paulina M.; McKinney, Harriet; Meacham, Stuart; Mead, Daniel; Nagy, Magdolna; Penkett, Christopher J.; Rendon, Augusto; Seyres, Denis; Sun, Benjamin; Tuna, Salih; van der Weide, Marie-Elise; Wingett, Steven W.; Martens, Joost H.; Stegle, Oliver; Richardson, Sylvia; Vallier, Ludovic; Roberts, David J.; Freson, Kathleen; Wernisch, Lorenz; Stunnenberg, Hendrik G.; Danesh, John; Fraser, Peter; Soranzo, Nicole; Butterworth, Adam S.; Heemskerk, Johan W.; Turro, Ernest; Spivakov, Mikhail; Ouwehand, Willem H.; Astle, William J.; Downes, Kate; Kostadima, Myrto; Frontini, Mattia

    2017-01-01

    Linking non-coding genetic variants associated with the risk of diseases or disease-relevant traits to target genes is a crucial step to realize GWAS potential in the introduction of precision medicine. Here we set out to determine the mechanisms underpinning variant association with platelet quantitative traits using cell type-matched epigenomic data and promoter long-range interactions. We identify potential regulatory functions for 423 of 565 (75%) non-coding variants associated with platelet traits and we demonstrate, through ex vivo and proof of principle genome editing validation, that variants in super enhancers play an important role in controlling archetypical platelet functions. PMID:28703137

  5. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits.

    PubMed

    Ginde, Adit A; Blanc, Phillip G; Lieberman, Rebecca M; Camargo, Carlos A

    2008-04-01

    Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases identified by the candidate ICD-9-CM codes during the study period by chart review. The case definition for hypoglycemia was a documented blood glucose <3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86-92) for detecting hypoglycemia visits. The proposed algorithm improves on prior strategies to identify hypoglycemia visits in administrative data sets and will enhance the ability to study the epidemiology and design interventions for this important complication of diabetes care.
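
    A hedged sketch of the screening logic described above, using the candidate code list from the abstract; the co-diagnosis exclusion list here is a placeholder, not the study's validated list:

```python
HYPO_CODES = {"250.3", "250.8", "251.0", "251.1", "251.2",
              "270.3", "775.0", "775.6", "962.3"}
EXCLUDE_WITH_250_8 = {"250.1", "250.2"}   # hypothetical co-diagnosis codes

def is_hypoglycemia_visit(dx_codes):
    """dx_codes: all ICD-9-CM codes on the visit, not just the first-listed."""
    codes = set(dx_codes)
    hits = codes & HYPO_CODES
    if hits == {"250.8"} and codes & EXCLUDE_WITH_250_8:
        return False      # 250.8 counts only without excluded co-diagnoses
    return bool(hits)

print(is_hypoglycemia_visit(["250.8", "401.9"]))           # True
print(is_hypoglycemia_visit(["250.8", "250.1", "401.9"]))  # False
```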

  6. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018
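
    For contrast with the proposed method, the time-multiplexed Gray-code baseline mentioned above is easy to sketch: n binary patterns encode 2^n stripe indices such that adjacent stripes differ in exactly one bit:

```python
def gray_code_patterns(n_bits):
    """Per bit plane (MSB first), the on/off sequence across stripes."""
    codes = [i ^ (i >> 1) for i in range(2 ** n_bits)]   # binary Gray codes
    return [[(c >> b) & 1 for c in codes] for b in reversed(range(n_bits))]

for plane in gray_code_patterns(3):
    print("".join(map(str, plane)))
# 00001111
# 00111100
# 01100110
```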

  7. A coded structured light system based on primary color stripe projection and monochrome imaging.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  8. Off-design computer code for calculating the aerodynamic performance of axial-flow fans and compressors

    NASA Technical Reports Server (NTRS)

    Schmidt, James F.

    1995-01-01

    An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included, which generally shows good agreement.

  9. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    DTIC Science & Technology

    2014-03-27

    Thesis record (AFIT-ENP-14-M-05); no abstract available. The document covers verification and validation of Monte Carlo N-Particle Code 6 (MCNP6) with neutron protection factor measurements of an iron box. Distribution Statement A: approved for public release; distribution unlimited.

  10. A Matlab Program for Textural Classification Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Leite, E. P.; de Souza, C.

    2008-12-01

    A new MATLAB code that provides tools to perform classification of textural images for applications in the Geosciences is presented. The program, here coined TEXTNN, comprises the computation of variogram maps in the frequency domain for specific lag distances in the neighborhood of a pixel. The result is then converted back to the spatial domain, where directional or omnidirectional semivariograms are extracted. Feature vectors are built with textural information composed of the semivariance values at these lag distances and, moreover, with histogram measures of mean, standard deviation and weighted fill-ratio. This procedure is applied to a selected group of pixels or to all pixels in an image using a moving window. A feed-forward back-propagation Neural Network can then be designed and trained on feature vectors of predefined classes (training set). The training phase minimizes the mean-squared error on the training set. Additionally, at each iteration, the mean-squared error on the validation set is assessed and a test set is evaluated. The program also calculates contingency matrices, global accuracy and the kappa coefficient for the three data sets, allowing a quantitative appraisal of the predictive power of the Neural Network models. The interpreter is able to select the best model obtained from a k-fold cross-validation or to use a unique split-sample data set for classification of all pixels in a given textural image. The code is open to the geoscientific community and is very flexible, allowing the experienced user to modify it as necessary. The performance of the algorithms and the end-user program was tested using synthetic images, orbital SAR (RADARSAT) imagery for oil seepage detection, and airborne multi-polarimetric SAR imagery for geologic mapping. The overall results proved very promising.
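
    A minimal sketch (not TEXTNN itself, which is a MATLAB program) of building such a textural feature vector from semivariances at a few lag distances plus histogram measures:

```python
import numpy as np

def semivariance(img, lag, axis):
    """Mean half squared difference between pixels 'lag' apart on an axis."""
    a = np.take(img, range(img.shape[axis] - lag), axis=axis)
    b = np.take(img, range(lag, img.shape[axis]), axis=axis)
    return 0.5 * np.mean((a - b) ** 2)

def feature_vector(window, lags=(1, 2, 4)):
    """Directional semivariances at several lags, plus histogram measures."""
    feats = [semivariance(window, l, ax) for l in lags for ax in (0, 1)]
    feats += [window.mean(), window.std()]
    return np.array(feats)

rng = np.random.default_rng(3)
print(feature_vector(rng.random((16, 16))).round(3))
```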

  11. Statistical inference of static analysis rules

    NASA Technical Reports Server (NTRS)

    Engler, Dawson Richards (Inventor)

    2009-01-01

    Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
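
    The counting-and-ranking idea can be made concrete with a toy sketch (an illustration only, not the patented implementation): observances and violations are tallied per rule, and violations are then ordered by a score expressing how likely the rule is genuinely required. The z-style score and the prior used below are assumptions.

        import math

        def validity_score(observances, violations, prior=0.9):
            """Toy score: how strongly the counts suggest the code is
            *required* to observe the rule (higher = more likely valid)."""
            n = observances + violations
            if n == 0:
                return 0.0
            p_hat = observances / n
            return (p_hat - prior) / math.sqrt(prior * (1 - prior) / n)

        # A rule observed 99 times and violated once outranks one
        # observed 3 times and violated once.
        candidates = {"lock/unlock pairing": (99, 1),
                      "check return of foo()": (3, 1)}
        for rule, (obs, viol) in sorted(candidates.items(),
                                        key=lambda kv: -validity_score(*kv[1])):
            print(rule, round(validity_score(obs, viol), 2))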

  12. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record.

    PubMed

    Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. The objective was to develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
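
    For illustration only, a minimal sketch of what one such inference rule and its validation might look like in code. The rule shown (diabetes inferred from an antidiabetic medication or repeated elevated HbA1c results), the field names and the thresholds are all hypothetical, not the paper's actual knowledge base.

        def infers_diabetes(record):
            """Hypothetical rule: antidiabetic medication, or at least two
            HbA1c results of 6.5% or higher (assumed threshold)."""
            if any(m in record.get("medications", []) for m in ("metformin", "insulin")):
                return True
            return len([v for v in record.get("hba1c_results", []) if v >= 6.5]) >= 2

        def sensitivity_ppv(predictions, gold):
            tp = sum(p and g for p, g in zip(predictions, gold))
            fn = sum((not p) and g for p, g in zip(predictions, gold))
            fp = sum(p and (not g) for p, g in zip(predictions, gold))
            return tp / (tp + fn), tp / (tp + fp)

        records = [{"medications": ["metformin"]},
                   {"hba1c_results": [7.1, 6.9]},
                   {"hba1c_results": [5.4]}]
        gold = [True, True, False]
        print(sensitivity_ppv([infers_diabetes(r) for r in records], gold))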

  13. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Access to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled, high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  14. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  15. AEOLUS: A MARKOV CHAIN MONTE CARLO CODE FOR MAPPING ULTRACOOL ATMOSPHERES. AN APPLICATION ON JUPITER AND BROWN DWARF HST LIGHT CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karalidi, Theodora; Apai, Dániel; Schneider, Glenn

    Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VizieR.
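
    To make the retrieval machinery concrete, here is a minimal Metropolis sampler fitted to a synthetic rotational light curve. This is a sketch of the MCMC idea only: the one-spot sinusoidal model, the noise level and the proposal step sizes are assumptions, far simpler than the multi-spot atmospheric model Aeolus actually fits.

        import numpy as np

        rng = np.random.default_rng(1)

        def model(params, t, period=8.0):
            """Toy light curve: one bright spot modulating a rotating body."""
            amp, phase = params
            return 1.0 + amp * np.cos(2 * np.pi * t / period - phase)

        t = np.linspace(0, 16, 200)
        data = model((0.02, 1.3), t) + rng.normal(0, 0.002, t.size)

        def log_like(params):
            return -0.5 * np.sum((data - model(params, t))**2 / 0.002**2)

        # Minimal Metropolis sampler; step sizes are arbitrary
        chain, current = [], np.array([0.01, 0.0])
        for _ in range(5000):
            proposal = current + rng.normal(0, [0.002, 0.05])
            if np.log(rng.uniform()) < log_like(proposal) - log_like(current):
                current = proposal
            chain.append(current.copy())

        print(np.mean(chain[1000:], axis=0))  # posterior means: amplitude, phase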

  16. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Validation of the electromagnetic code FACETS for numerical simulation of radar target images. S. Wong, DRDC Ottawa. ...for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design

  17. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Center will be a resource for the validation efforts of industry, DOE programs, and academia.

  18. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide range of applications, from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
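
    The perspective design model that the results validate is the standard pinhole projection; the short sketch below shows that mapping for a single 3-D point. The intrinsic parameters are placeholder values, not the calibrated ones from the paper.

        import numpy as np

        def project(point_xyz, fx, fy, cx, cy):
            """Pinhole (perspective) projection of a point in camera
            coordinates to pixel coordinates."""
            x, y, z = point_xyz
            return np.array([fx * x / z + cx, fy * y / z + cy])

        # A point source 2 m away and 10 cm off-axis (illustrative numbers)
        print(project((0.10, 0.0, 2.0), fx=600, fy=600, cx=160, cy=120))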

  19. Idiographic measurement of depressive thinking: development and preliminary validation of the Sentence Completion Test for Chronic Pain (SCP).

    PubMed

    Rusu, Adina C; Hallner, Dirk

    2018-06-06

    Depression is a common feature of chronic pain, but there is only limited research into the content and frequency of depressed cognitions in pain patients. This study describes the development of the Sentence Completion Test for Chronic Pain (SCP), an idiographic measure for assessing depressive thinking in chronic pain patients. The sentence completion task requires participants to finish incomplete sentences in their own words from a set of predefined stems that include negative, positive and neutral valenced self-referenced words. In addition, the stems include past, future and world stems, which reflect the theoretical negative triad typical of depression. Complete responses are coded by valence (negative, positive and neutral) and by pain- and health-related content. A total of 89 participants were included in this study. Forty-seven adult out-patients formed the depressed pain group and were compared to a non-clinical control sample of 42 healthy control participants. This study comprised several phases: (1) theory-driven generation of coding rules; (2) the development of a coding manual by a panel of experts; (3) comparison of the reliability of coding by expert raters with and without the use of the coding manual; (4) preliminary analyses of the construct validity of the SCP. The internal consistency of the SCP was tested using the Kuder-Richardson coefficient (KR-20). Inter-rater agreement was assessed by intra-class correlations (ICC). The content and construct validity of the SCP were investigated by correlation coefficients between SCP negative completions, the Hospital Anxiety and Depression Scale (HADS) depression scores and the number of symptoms on the Structured Clinical Interview for DSM-IV-TR (SCID). As predicted for content validity, the number of SCP negative statements was significantly greater in the depressed pain group, and this group also produced significantly fewer positive statements compared to the healthy control group. The number of negative pain completions and negative health completions was significantly greater in the depressed pain group. As expected, in the depressed pain group the correlation between SCP negatives and the HADS depression score was r=0.60 and the correlation between SCP negatives and the number of symptoms on the SCID was r=0.56. The SCP demonstrated good content validity, internal consistency and inter-rater reliability. Uses for this measure are suggested, such as complementing questionnaire measures with an idiographic assessment of depressive thinking and generating hypotheses about key problems within a cognitive-behavioural case-formulation.
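
    For reference, the KR-20 internal-consistency statistic is straightforward to compute from dichotomously coded responses; a small sketch with made-up data follows (the standard formula, not the authors' software).

        import numpy as np

        def kr20(items):
            """Kuder-Richardson 20 for 0/1 item scores;
            rows are respondents, columns are items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            p = items.mean(axis=0)           # proportion endorsing each item
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total_var)

        scores = np.array([[1, 1, 0, 1], [1, 0, 0, 1], [0, 0, 0, 1],
                           [1, 1, 1, 1], [0, 1, 0, 0]])
        print(round(kr20(scores), 3))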

  20. (NTF) National Transonic Facility Test 213-SFW Flow Control II,

    NASA Image and Video Library

    2012-11-19

    (NTF) National Transonic Facility Test 213-SFW Flow Control II, Fast-MAC Model: The Fundamental Aerodynamics Subsonic Transonic-Modular Active Control (Fast-MAC) model was tested for the second time in the NTF. The objectives were to document the effects of Reynolds number on circulation control aerodynamics and to develop an open data set for CFD code validation. Image taken in building 1236, National Transonic Facility.

  1. Validation of two case definitions to identify pressure ulcers using hospital administrative data

    PubMed Central

    Ho, Chester; Jiang, Jason; Eastwood, Cathy A; Wong, Holly; Weaver, Brittany; Quan, Hude

    2017-01-01

    Objective Pressure ulcer development is a quality of care indicator, as pressure ulcers are potentially preventable. Yet pressure ulcers are a leading cause of morbidity, discomfort and additional healthcare costs for inpatients. Methods are lacking for accurate surveillance of pressure ulcers in hospitals to track occurrences and evaluate care improvement strategies. The main study aim was to validate the hospital discharge abstract database (DAD) in recording pressure ulcers against nursing consult reports, and to calculate the prevalence of pressure ulcers in Alberta, Canada in the DAD. We hypothesised that a more inclusive case definition for pressure ulcers would enhance the validity of cases identified in administrative data for research and quality improvement purposes. Setting A cohort of patients with pressure ulcers was identified from enterostomal (ET) nursing consult documents at a large university hospital in 2011. Participants There were 1217 patients with pressure ulcers in ET nursing documentation whose records were linked to a corresponding record in the DAD to validate the DAD for correct and accurate identification of pressure ulcer occurrence, using two case definitions for pressure ulcer. Results Using pressure ulcer definition 1 (7 codes), prevalence was 1.4%, and using definition 2 (29 codes), prevalence was 4.2% after adjusting for misclassifications. The results were lower than expected. Definition 1 sensitivity was 27.7% and specificity was 98.8%, while definition 2 sensitivity was 32.8% and specificity was 95.9%. Pressure ulcer occurrence in both the DAD and ET consultation increased with age, number of comorbidities and length of stay. Conclusion The DAD underestimates pressure ulcer prevalence. Since various codes are used to record pressure ulcers in the DAD, the case definition with more codes captures more pressure ulcer cases, and may be useful for monitoring facility trends. However, the low sensitivity suggests that this data source may not be accurate for determining overall prevalence, and should be cautiously compared with other prevalence studies. PMID:28851785

  2. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  3. Study of steam condensation at sub-atmospheric pressure: setting a basic research using MELCOR code

    NASA Astrophysics Data System (ADS)

    Manfredini, A.; Mazzini, M.

    2017-11-01

    One of the most serious accidents that can occur in the experimental nuclear fusion reactor ITER is the break of one of the headers of the refrigeration system of the first wall of the Tokamak. This results in the discharge of a water-steam mixture into the vacuum vessel (VV), with consequent pressurization of this container. To prevent the pressure in the VV from exceeding 150 kPa absolute, a system discharges the steam into a suppression pool at an absolute pressure of 4.2 kPa. The computer codes used to analyze such incidents (e.g., RELAP5 or MELCOR) are not validated experimentally for these conditions. Therefore, we planned a basic research program in order to obtain experimental data useful for validating the heat transfer correlations used in these codes. After a thorough literature search on this topic, ACTA, in collaboration with the staff of ITER, defined the experimental matrix and performed the design of the experimental apparatus. For the thermal-hydraulic design of the experiments, we executed a series of calculations with MELCOR. This code, however, was used in an unconventional mode, with the development of models suited respectively to low and high steam flow-rate tests. The article concludes with a discussion of the placement of the experimental data within the map featuring the phenomenon characteristics, showing the importance of the new knowledge acquired, particularly in the case of chugging.

  4. Impact of GNSS orbit modeling on LEO orbit and gravity field determination

    NASA Astrophysics Data System (ADS)

    Arnold, Daniel; Meyer, Ulrich; Sušnik, Andreja; Dach, Rolf; Jäggi, Adrian

    2017-04-01

    On January 4, 2015 the Center for Orbit Determination in Europe (CODE) changed the solar radiation pressure modeling for GNSS satellites to an updated version of the empirical CODE orbit model (ECOM). Furthermore, since September 2012 CODE operationally computes satellite clock corrections not only for the 3-day long-arc solutions, but also for the non-overlapping 1-day GNSS orbits. This provides different sets of GNSS products for Precise Point Positioning, as employed, e.g., in the GNSS-based precise orbit determination of low Earth orbiters (LEOs) and the subsequent Earth gravity field recovery from kinematic LEO orbits. While the impact of the mentioned changes in orbit modeling and solution strategy on the GNSS orbits and geophysical parameters was studied in detail, their implications on the LEO orbits were not yet analyzed. We discuss the impact of the update of the ECOM and the influence of 1-day and 3-day GNSS orbit solutions on zero-difference LEO orbit and gravity field determination, where the GNSS orbits and clock corrections, as well as the Earth rotation parameters are introduced as fixed external products. Several years of kinematic and reduced-dynamic orbits for the two GRACE LEOs are computed with GNSS products based on both the old and the updated ECOM, as well as with 1- and 3-day GNSS products. The GRACE orbits are compared by means of standard validation measures. Furthermore, monthly and long-term GPS-only and combined GPS/K-band gravity field solutions are derived from the different sets of kinematic LEO orbits. GPS-only fields are validated by comparison to combined GPS/K-band solutions, while the combined solutions are validated by analysis of the formal errors, as well as by comparing them to the combined GRACE solutions of the European Gravity Service for Improved Emergency Management (EGSIEM) project.

  5. Identification of Long Bone Fractures in Radiology Reports Using Natural Language Processing to support Healthcare Quality Improvement.

    PubMed

    Grundmeier, Robert W; Masino, Aaron J; Casper, T Charles; Dean, Jonathan M; Bell, Jamie; Enriquez, Rene; Deakyne, Sara; Chamberlain, James M; Alpern, Elizabeth R

    2016-11-09

    Important information to support healthcare quality improvement is often recorded in free text documents such as radiology reports. Natural language processing (NLP) methods may help extract this information, but these methods have rarely been applied outside the research laboratories where they were developed. Our objective was to implement and validate NLP tools to identify long bone fractures for pediatric emergency medicine quality improvement. Using freely available statistical software packages, we implemented NLP methods to identify long bone fractures from radiology reports. A sample of 1,000 radiology reports was used to construct three candidate classification models. A test set of 500 reports was used to validate model performance. Blinded manual review of radiology reports by two independent physicians provided the reference standard. Each radiology report was segmented, and word stem and bigram features were constructed. Common English "stop words" and rare features were excluded. We used 10-fold cross-validation to select optimal configuration parameters for each model. Accuracy, recall, precision and the F1 score were calculated. The final model was compared to the use of diagnosis codes for the identification of patients with long bone fractures. There were 329 unique word stems and 344 bigrams in the training documents. A support vector machine classifier with Gaussian kernel performed best on the test set with accuracy=0.958, recall=0.969, precision=0.940, and F1 score=0.954. Optimal parameters for this model were cost=4 and gamma=0.005. The three classification models that we tested all performed better than diagnosis codes in terms of accuracy, precision, and F1 score (diagnosis code accuracy=0.932, recall=0.960, precision=0.896, and F1 score=0.927). NLP methods using a corpus of 1,000 training documents accurately identified acute long bone fractures from radiology reports. Strategic use of straightforward NLP methods, implemented with freely available software, offers quality improvement teams new opportunities to extract information from narrative documents.
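
    A compact sketch of this kind of pipeline using scikit-learn follows. The paper names only "freely available statistical software packages", so the library choice, the tiny toy corpus and the parameter grid are assumptions; true word stemming is also omitted (CountVectorizer extracts word and bigram counts but does not stem).

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        reports = ["transverse fracture of the femoral shaft",
                   "no acute fracture or dislocation",
                   "spiral fracture of the tibia",
                   "soft tissue swelling, osseous structures intact"]
        labels = [1, 0, 1, 0]  # 1 = long bone fracture

        pipe = make_pipeline(
            CountVectorizer(ngram_range=(1, 2), stop_words="english"),
            SVC(kernel="rbf"),  # Gaussian kernel, as in the paper
        )
        # The paper used 10-fold cross-validation on 1,000 reports;
        # cv=2 here only because the toy corpus has 4 documents.
        search = GridSearchCV(pipe, {"svc__C": [1, 4, 16],
                                     "svc__gamma": [0.005, 0.05]}, cv=2)
        search.fit(reports, labels)
        print(search.best_params_)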

  6. Comparing Spray Characteristics from Reynolds Averaged Navier-Stokes (RANS) National Combustion Code (NCC) Calculations Against Experimental Data for a Turbulent Reacting Flow

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony C.; Moder, Jeffery P.

    2010-01-01

    Developing physics-based tools to aid in reducing harmful combustion emissions, like nitrogen oxides (NOx), carbon monoxide (CO), unburnt hydrocarbons (UHCs), and sulfur oxides (SOx), is an important goal of aeronautics research at NASA. As part of that effort, NASA Glenn Research Center is performing a detailed assessment and validation of an in-house combustion CFD code known as the National Combustion Code (NCC) for turbulent reacting flows. To assess the current capabilities of NCC for simulating turbulent reacting flows with liquid jet fuel injection, a set of Single Swirler Lean Direct Injection (LDI) experiments performed at the University of Cincinnati was chosen as an initial validation data set. This Jet-A/air combustion experiment operates at a lean equivalence ratio of 0.75 at atmospheric pressure and has a 4 percent static pressure drop across the swirler. Detailed comparisons of NCC predictions for gas temperature and gaseous emissions (CO and NOx) against this experiment are considered in a previous work. The current paper is focused on detailed comparisons of the spray characteristics (radial profiles of the drop size distribution at several radial rakes) from NCC simulations against the experimental data. Comparisons against experimental data show that the correlation for primary spray break-up implemented by Raju in the NCC produces the most realistic results, though further improvement is needed. Given either the single-step or ten-step chemical kinetics model, use of a spray size correlation gives similar, acceptable results.

  7. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  8. Validation of psoriatic arthritis diagnoses in electronic medical records using natural language processing

    PubMed Central

    Cai, Tianxi; Karlson, Elizabeth W.

    2013-01-01

    Objectives To test whether data extracted from full text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing, 31 predictors were extracted and three random forest algorithms trained using coded, narrative, and combined predictors. The receiver operating characteristic (ROC) curve was used to identify the optimal algorithm, and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95%CI 55%–58%). Using a combination of coded data and NLP, the random forest algorithm reached a PPV of 90% (95%CI 86%–93%) at a sensitivity of 87% (95%CI 83%–91%) in the training data. The PPV was 93% (95%CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC curve (p < 0.001). Conclusions Using NLP with text notes from electronic medical records improved the performance of the prediction algorithm significantly. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955
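
    The cut-point step generalizes readily: scan candidate thresholds from the highest score down and keep the most inclusive one whose positive predictive value still meets the target, which maximizes sensitivity at that PPV. A minimal sketch (ignoring tied scores) with synthetic data:

        import numpy as np

        def cutpoint_for_ppv(scores, labels, target_ppv=0.90):
            """Lowest score threshold whose PPV is still >= target_ppv."""
            order = np.argsort(scores)[::-1]
            s, y = np.asarray(scores)[order], np.asarray(labels)[order]
            ppv = np.cumsum(y) / np.arange(1, len(y) + 1)
            ok = np.where(ppv >= target_ppv)[0]
            return None if len(ok) == 0 else s[ok[-1]]

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 500)
        scores = y * 0.5 + rng.random(500) * 0.8  # noisy scores
        print(cutpoint_for_ppv(scores, y))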

  9. National Variation in Costs and Mortality for Leukodystrophy Patients in U.S. Children’s Hospitals

    PubMed Central

    Brimley, Cameron J; Lopez, Jonathan; van Haren, Keith; Wilkes, Jacob; Sheng, Xiaoming; Nelson, Clint; Korgenski, E. Kent; Srivastava, Rajendu; Bonkowsky, Joshua L.

    2013-01-01

    Background Inherited leukodystrophies are progressive, debilitating neurological disorders with few treatment options and high mortality rates. Our objective was to determine national variation in the costs for leukodystrophy patients and to evaluate differences in their care. Methods We developed an algorithm to identify inherited leukodystrophy patients in de-identified data sets using a recursive tree model based on ICD-9-CM diagnosis and procedure charge codes. Validation of the algorithm was performed independently at two institutions, and with data from the Pediatric Health Information System (PHIS) of 43 U.S. children’s hospitals, for a seven-year time period, 2004–2010. Results A recursive algorithm was developed and validated, based on six ICD-9 codes and one procedure code, that had a sensitivity of up to 90% (range 61–90%) and a specificity of up to 99% (range 53–99%) for identifying inherited leukodystrophy patients. Inherited leukodystrophy patients comprise 0.4% of admissions to children’s hospitals and 0.7% of costs. Over seven years these patients required $411 million of hospital care, or $131,000/patient. Hospital costs for leukodystrophy patients varied at different institutions, ranging from 2 to 15 times more than the average pediatric patient. There was a statistically significant correlation between higher volume and increased cost efficiency. Increased mortality rates had an inverse relationship with increased patient volume that was not statistically significant. Conclusions We developed and validated a code-based algorithm for identifying leukodystrophy patients in de-identified national data sets. Leukodystrophy patients account for $59 million of costs yearly at children’s hospitals. Our data highlight the potential to reduce unwarranted variability and improve patient care. PMID:23953952

  10. Effect of a Diffusion Zone on Fatigue Crack Propagation in Layered FGMs

    NASA Astrophysics Data System (ADS)

    Hauber, Brett; Brockman, Robert; Paulino, Glaucio

    2008-02-01

    Research into functionally graded materials (FGMs) has led to advances in our ability to analyze cracks. However, two prominent aspects remain relatively unexplored: (1) development and validation of modeling methods for fatigue crack propagation in FGMs, and (2) experimental validation of stress intensity models in engineered materials such as two-phase monolithic and graded materials. This work addresses some of these problems for a limited set of conditions, material systems (e.g., Ti/TiB), and material gradients. Numerical analyses are conducted for single edge notch bend (SENB) specimens. Stress intensity factors are computed using the specialized finite element code I-Franc (Illinois Fracture Analysis Code), which is tailored for both homogeneous and graded materials, as well as Franc2DL and ABAQUS. Crack extension is considered by means of specified crack increments, together with fatigue evaluations to predict crack propagation life. Results will be used to determine the linear material gradient parameters that are significant for prediction of fatigue crack growth behavior.
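
    The fatigue-life step is typically an integration of a crack growth law such as the Paris law, da/dN = C (ΔK)^m; the sketch below does this for an edge crack under constant-amplitude loading. The constants and geometry factor are illustrative assumptions, not the Ti/TiB values from the study.

        import numpy as np

        C, m = 1e-11, 3.0        # Paris constants, (m/cycle)/(MPa*sqrt(m))^m
        stress_range = 100.0     # MPa
        a, a_final = 1e-3, 1e-2  # initial and final crack lengths, m
        da, cycles = 1e-5, 0.0
        while a < a_final:
            dK = 1.12 * stress_range * np.sqrt(np.pi * a)  # edge-crack SIF range
            cycles += da / (C * dK**m)   # invert da/dN = C*(dK)^m
            a += da
        print(f"predicted life: {cycles:.3e} cycles")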

  11. Cars Thermometry in a Supersonic Combustor for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Danehy, P. M.; Springer, R. R.; DeLoach, R.; Capriotti, D. P.

    2002-01-01

    An experiment has been conducted to acquire data for the validation of computational fluid dynamics (CFD) codes used in the design of supersonic combustors. The primary measurement technique is coherent anti-Stokes Raman spectroscopy (CARS), although surface pressures and temperatures have also been acquired. Modern design-of-experiment techniques have been used to maximize the quality of the data set (for the given level of effort) and minimize systematic errors. The combustor consists of a diverging duct with a single downstream-angled wall injector. The nominal entrance Mach number is 2, and the enthalpy nominally corresponds to Mach 7 flight. Temperature maps are obtained at several planes in the flow for two cases: in one case the combustor is piloted by injecting fuel upstream of the main injector; the second is not. Boundary conditions and uncertainties are adequately characterized. Accurate CFD calculation of the flow will ultimately require accurate modeling of the chemical kinetics and turbulence-chemistry interactions as well as accurate modeling of the turbulent mixing.

  12. Lessons Learned from a Cross-Model Validation between a Discrete Event Simulation Model and a Cohort State-Transition Model for Personalized Breast Cancer Treatment.

    PubMed

    Jahn, Beate; Rochau, Ursula; Kurzthaler, Christina; Paulden, Mike; Kluibenschädl, Martina; Arvandi, Marjan; Kühne, Felicitas; Goehler, Alexander; Krahn, Murray D; Siebert, Uwe

    2016-04-01

    Breast cancer is the most common malignancy among women in developed countries. We developed a model (the Oncotyrol breast cancer outcomes model) to evaluate the cost-effectiveness of a 21-gene assay when used in combination with Adjuvant! Online to support personalized decisions about the use of adjuvant chemotherapy. The goal of this study was to perform a cross-model validation. The Oncotyrol model evaluates the 21-gene assay by simulating a hypothetical cohort of 50-year-old women over a lifetime horizon using discrete event simulation. Primary model outcomes were life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs). We followed the International Society for Pharmacoeconomics and Outcomes Research-Society for Medical Decision Making (ISPOR-SMDM) best practice recommendations for validation and compared modeling results of the Oncotyrol model with the state-transition model developed by the Toronto Health Economics and Technology Assessment (THETA) Collaborative. Both models were populated with Canadian THETA model parameters, and outputs were compared. The differences between the models varied among the different validation end points. The smallest relative differences were in costs, and the greatest were in QALYs. All relative differences were less than 1.2%. The cost-effectiveness plane showed that small differences in the model structure can lead to different sets of nondominated test-treatment strategies with different efficiency frontiers. We faced several challenges: distinguishing between differences in outcomes due to different modeling techniques and initial coding errors, defining meaningful differences, and selecting measures and statistics for comparison (means, distributions, multivariate outcomes). Cross-model validation was crucial to identify and correct coding errors and to explain differences in model outcomes. In our comparison, small differences in either QALYs or costs led to changes in ICERs because of changes in the set of dominated and nondominated strategies. © The Author(s) 2015.
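
    Because the conclusion turns on how small cost or QALY differences reshuffle the nondominated set, a short sketch of efficiency-frontier construction and ICER calculation may help; the strategy names and numbers below are invented.

        def efficiency_frontier(strategies):
            """Strategies as (name, cost, QALYs): return the nondominated set
            and the ICERs along the frontier, removing strongly and
            extendedly dominated options."""
            s = sorted(strategies, key=lambda x: (x[1], -x[2]))
            frontier = []
            for name, cost, eff in s:
                if frontier and eff <= frontier[-1][2]:
                    continue  # strongly dominated: costlier, no more effective
                frontier.append((name, cost, eff))
                while len(frontier) >= 3:  # enforce increasing ICERs
                    (n0, c0, e0), (n1, c1, e1), (n2, c2, e2) = frontier[-3:]
                    if (c1 - c0) / (e1 - e0) >= (c2 - c1) / (e2 - e1):
                        del frontier[-2]  # extendedly dominated
                    else:
                        break
            icers = [(frontier[i][0],
                      (frontier[i][1] - frontier[i - 1][1]) /
                      (frontier[i][2] - frontier[i - 1][2]))
                     for i in range(1, len(frontier))]
            return frontier, icers

        strategies = [("no assay", 10000, 9.50),
                      ("assay if intermediate risk", 12000, 9.60),
                      ("assay for all", 14000, 9.62)]
        print(efficiency_frontier(strategies))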

  13. Method for transition prediction in high-speed boundary layers, phase 2

    NASA Astrophysics Data System (ADS)

    Herbert, T.; Stuckert, G. K.; Lin, N.

    1993-09-01

    The parabolized stability equations (PSE) are a new and more reliable approach to analyzing the stability of streamwise varying flows such as boundary layers. This approach has been previously validated for idealized incompressible flows. Here, the PSE are formulated for highly compressible flows in general curvilinear coordinates to permit the analysis of high-speed boundary-layer flows over fairly general bodies. Rigorous numerical studies are carried out to study the convergence and accuracy of the linear-stability code LSH and the linear/nonlinear PSE code PSH. Physical interfaces are set up to analyze the M = 8 boundary layer over a blunt cone calculated using a thin-layer Navier-Stokes (TNLS) code and the flow over a sharp cone at angle of attack calculated using the AFWAL parabolized Navier-Stokes (PNS) code. While stability and transition studies at high speeds are far from routine, the method developed here is the best tool available to research the physical processes in high-speed boundary layers.

  14. Validating a large geophysical data set: Experiences with satellite-derived cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie

    1992-01-01

    We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.

  15. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Computation of Thermally Perfect Properties of Oblique Shock Waves

    NASA Technical Reports Server (NTRS)

    Tatum, Kenneth E.

    1996-01-01

    A set of compressible flow relations describing flow properties across oblique shock waves, derived for a thermally perfect, calorically imperfect gas, is applied within the existing thermally perfect gas (TPG) computer code. The relations are based upon a value of cp expressed as a polynomial function of temperature. The updated code produces tables of compressible flow properties of oblique shock waves, as well as the original properties of normal shock waves and basic isentropic flow, in a format similar to the tables for normal shock waves found in NACA Rep. 1135. The code results are validated in both the calorically perfect and the calorically imperfect, thermally perfect temperature regimes through comparisons with the theoretical methods of NACA Rep. 1135, and with a state-of-the-art computational fluid dynamics code. The advantages of the TPG code for oblique shock wave calculations, as well as for the properties of isentropic flow and normal shock waves, are its ease of use, and its applicability to any type of gas (monatomic, diatomic, triatomic, polyatomic, or any specified mixture thereof).
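
    The thermally perfect idealization is exactly this: cp varies with temperature but not pressure, and derived quantities such as the specific-heat ratio follow from the polynomial. A short sketch with a placeholder quadratic for air (the coefficients are assumptions, not those used by the TPG code):

        R = 287.05  # J/(kg K), specific gas constant for air

        def cp_air(T):
            """Placeholder cp(T) polynomial, roughly valid 200-2000 K."""
            return 1000.0 + 0.02 * (T - 300.0) + 2.0e-5 * (T - 300.0)**2

        def gamma(T):
            """Specific-heat ratio for a thermally perfect gas."""
            return cp_air(T) / (cp_air(T) - R)

        for T in (300.0, 1000.0, 2000.0):
            print(f"T = {T:6.0f} K  cp = {cp_air(T):7.1f}  gamma = {gamma(T):.4f}")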

  17. Aeroacoustic Codes For Rotor Harmonic and BVI Noise--CAMRAD.Mod1/HIRES

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Boyd, D. Douglas, Jr.; Burley, Casey L.; Jolly, J. Ralph, Jr.

    1996-01-01

    This paper presents the status of non-CFD aeroacoustic codes at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. The prediction approach incorporates three primary components: CAMRAD.Mod1, a substantially modified version of the performance/trim/wake code CAMRAD; HIRES, a high resolution blade loads post-processor; and WOPWOP, an acoustic code. The functional capabilities and physical modeling in CAMRAD.Mod1/HIRES will be summarized and illustrated. A new multi-core roll-up wake modeling approach is introduced and validated. Predictions of rotor wake and radiated noise are compared with the results of the HART program, a model BO-105 wind tunnel test at the DNW in Europe. Additional comparisons are made to results from a DNW test of a contemporary design four-bladed rotor, as well as from a Langley test of a single proprotor (tiltrotor) three-bladed model configuration. Because the method is shown to help eliminate the necessity of guesswork in setting code parameters between different rotor configurations, it should prove useful as a rotor noise design tool.

  18. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using the Hydra-TH code. Highlights: • We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. • Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier-Stokes equations. • Hydra-TH can accurately simulate laminar boundary layers. • Hydra-TH can accurately simulate turbulent boundary layers with RANS turbulence models. • Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined spaces.

  19. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.

  20. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

    The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, acausal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis work was done after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To that end, a set of ad hoc experimental scenarios was designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C were then performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  1. Method for rapid high-frequency seismogram calculation

    NASA Astrophysics Data System (ADS)

    Stabile, Tony Alfredo; De Matteis, Raffaella; Zollo, Aldo

    2009-02-01

    We present a method for rapid, high-frequency seismogram calculation that makes use of an algorithm to automatically generate an exhaustive set of seismic phases with an appreciable amplitude on the seismogram. The method uses a hierarchical order of ray and seismic-phase generation, taking into account existing constraints on ray paths as well as physical constraints. To compute synthetic seismograms, the COMRAD code (from the Italian "COdice Multifase per il RAy-tracing Dinamico") uses a dynamic ray-tracing code as its core. To validate the code, we computed synthetic seismograms in a layered medium using both COMRAD and a code that computes the complete wave field by the discrete wave number method. The seismograms are compared according to a time-frequency misfit criterion based on the continuous wavelet transform of the signals. Although the number of phases is considerably reduced by the selection criteria, the results show that the loss in amplitude on the whole seismogram is negligible. Moreover, the time needed to compute the synthetics using the COMRAD code (truncating the ray series at the 10th generation) is 3- to 4-fold less than that needed for the AXITRA code (up to a frequency of 25 Hz).
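
    A minimal sketch of a time-frequency envelope misfit of the kind used for such comparisons, built on a directly convolved Morlet continuous wavelet transform. The wavelet parameters, window length and frequency band are assumptions, and published misfit criteria also include a phase component omitted here.

        import numpy as np

        def morlet_cwt(signal, dt, freqs, w0=6.0):
            """CWT by direct convolution with a Morlet wavelet. The wavelet
            window must be shorter than the signal for mode='same'."""
            tw = np.arange(-3.0, 3.0, dt)
            out = np.empty((len(freqs), len(signal)), dtype=complex)
            for i, f in enumerate(freqs):
                scale = w0 / (2 * np.pi * f)
                tau = tw / scale
                wavelet = np.exp(1j * w0 * tau - tau**2 / 2) / np.sqrt(scale)
                out[i] = np.convolve(signal, wavelet, mode="same") * dt
            return out

        def tf_misfit(ref, test, dt, freqs):
            """Normalized envelope misfit in the time-frequency plane."""
            e_ref = np.abs(morlet_cwt(ref, dt, freqs))
            e_test = np.abs(morlet_cwt(test, dt, freqs))
            return np.sum(np.abs(e_test - e_ref)) / np.sum(e_ref)

        dt = 0.01
        t = np.arange(0, 8, dt)
        ref = np.sin(2 * np.pi * 5 * t) * np.exp(-t / 2)
        print(tf_misfit(ref, 0.95 * ref, dt, freqs=np.linspace(1, 25, 25)))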

  2. Building the evidence on simulation validity: comparison of anesthesiologists' communication patterns in real and simulated cases.

    PubMed

    Weller, Jennifer; Henderson, Robert; Webster, Craig S; Shulruf, Boaz; Torrie, Jane; Davies, Elaine; Henderson, Kaylene; Frampton, Chris; Merry, Alan F

    2014-01-01

    Effective teamwork is important for patient safety, and verbal communication underpins many dimensions of teamwork. The validity of the simulated environment would be supported if it elicited similar verbal communications to the real setting. The authors hypothesized that anesthesiologists would exhibit similar verbal communication patterns in routine operating room (OR) cases and routine simulated cases. The authors further hypothesized that anesthesiologists would exhibit different communication patterns in routine cases (real or simulated) and simulated cases involving a crisis. Key communications relevant to teamwork were coded from video recordings of anesthesiologists in the OR, routine simulation and crisis simulation and percentages were compared. The authors recorded comparable videos of 20 anesthesiologists in the two simulations, and 17 of these anesthesiologists in the OR, generating 400 coded events in the OR, 683 in the routine simulation, and 1,419 in the crisis simulation. The authors found no significant differences in communication patterns in the OR and the routine simulations. The authors did find significant differences in communication patterns between the crisis simulation and both the OR and the routine simulations. Participants rated team communication as realistic and considered their communications occurred with a similar frequency in the simulations as in comparable cases in the OR. The similarity of teamwork-related communications elicited from anesthesiologists in simulated cases and the real setting lends support for the ecological validity of the simulation environment and its value in teamwork training. Different communication patterns and frequencies under the challenge of a crisis support the use of simulation to assess crisis management skills.

  3. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record

    PubMed Central

    Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Background: Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. Objective: To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. Study design and methods: We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100 000 patient records to assess their performance compared with gold-standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100 000 records to assess its accuracy. Results: Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100 000 randomly selected patients showed high sensitivity (range: 62.8–100.0%) and positive predictive value (range: 79.8–99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. Conclusion: We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts. PMID:21613643
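
    To make the flavor of such inference rules concrete, here is a minimal sketch of a hypothetical rule and its validation against a manually reviewed gold standard; the condition, thresholds, and code prefixes are invented for illustration and are not the paper's validated knowledge base.

      # Hedged sketch: a hypothetical rule inferring a problem (diabetes,
      # as an invented example) from structured medication, lab and billing
      # data, plus the two performance measures reported in the paper.
      def infers_diabetes(record):
          on_hypoglycemic = any(m in record["medications"]
                                for m in ("metformin", "insulin"))   # assumed med list
          high_hba1c = record["labs"].get("hba1c", 0.0) >= 6.5        # assumed threshold
          billed = any(code.startswith("250")                         # assumed ICD-9 prefix
                       for code in record["billing_codes"])
          return on_hypoglycemic or high_hba1c or billed

      def sensitivity_ppv(rule, records, gold):
          # gold maps record id -> True/False from manual chart review
          tp = sum(1 for r in records if rule(r) and gold[r["id"]])
          fp = sum(1 for r in records if rule(r) and not gold[r["id"]])
          fn = sum(1 for r in records if not rule(r) and gold[r["id"]])
          return tp / (tp + fn), tp / (tp + fp)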

  4. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and their effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none has been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that such aeroelastic data sets often focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include the omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing candidate experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the Aeroelasticity Branch will examine other experimental efforts within the Subsonic Fixed Wing (SFW) program (such as testing of the NASA Common Research Model (CRM)) and other NASA programs and assess aeroelasticity issues and research topics.

  5. Validity of International Classification of Diseases (ICD) coding for dengue infections in hospital discharge records in Malaysia.

    PubMed

    Woon, Yuan-Liang; Lee, Keng-Yee; Mohd Anuar, Siti Fatimah Zahra; Goh, Pik-Pin; Lim, Teck-Onn

    2018-04-20

    Hospitalization due to dengue illness is an important measure of dengue morbidity. However, few studies have been based on administrative databases because the validity of the diagnosis codes is unknown. We validated the International Classification of Diseases, 10th revision (ICD-10) diagnosis coding for dengue infections in the Malaysian Ministry of Health's (MOH) hospital discharge database. This validation study involved retrospective review of available hospital discharge records and hand-searching of medical records for the years 2010 and 2013. We randomly selected 3219 hospital discharge records coded with dengue and non-dengue infections as their discharge diagnoses from the national hospital discharge database. We then randomly sampled 216 and 144 records for patients with and without codes for dengue, respectively, in keeping with their relative frequency in the MOH database, for chart review. The ICD codes for dengue were validated against a laboratory-based diagnostic standard (NS1 or IgM). The ICD-10-CM codes for dengue had a sensitivity of 94%, a modest specificity of 83%, a positive predictive value of 87%, and a negative predictive value of 92%. These results were stable between 2010 and 2013. However, specificity decreased substantially when patients presented with bleeding or a low platelet count. The diagnostic performance of the ICD codes for dengue in the MOH's hospital discharge database is adequate for use in health services research on dengue.
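
    The four agreement measures reported above follow directly from a 2x2 table of ICD coding against the laboratory standard. A minimal sketch, with placeholder counts:

      # Hedged sketch: standard diagnostic agreement measures from a 2x2
      # table (ICD coding vs. laboratory standard). Counts are placeholders,
      # not the study's data.
      def diagnostic_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv":         tp / (tp + fp),
              "npv":         tn / (tn + fn),
          }

      # Example (invented counts): diagnostic_metrics(tp=170, fp=25, fn=11, tn=119)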

  6. DeepMoon: Convolutional neural network trainer to identify moon craters

    NASA Astrophysics Data System (ADS)

    Silburt, Ari; Zhu, Chenchong; Ali-Dib, Mohamad; Menou, Kristen; Jackson, Alan

    2018-05-01

    DeepMoon trains a convolutional neural net using data derived from a global digital elevation map (DEM) and a catalog of craters to recognize craters on the Moon. The TensorFlow-based pipeline code is divided into three parts. The first generates a set of images of the Moon randomly cropped from the DEM, with corresponding crater positions and radii. The second trains a convnet using these data, and the third validates the convnet's predictions.
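
    A minimal sketch of the first pipeline stage as described, randomly cropping images from a DEM and retaining the craters that fall inside each crop; the array shapes and catalog format are assumptions, and the actual DeepMoon code also handles projection and rescaling.

      # Hedged sketch: generate random DEM crops with the craters that land
      # inside each crop. Catalog format (row, col, radius in pixels) is an
      # assumption for illustration.
      import numpy as np

      def random_crops(dem, craters, n_crops, size=256, rng=None):
          """dem: 2-D array; craters: list of (row, col, radius_px)."""
          rng = rng or np.random.default_rng()
          for _ in range(n_crops):
              r0 = rng.integers(0, dem.shape[0] - size)
              c0 = rng.integers(0, dem.shape[1] - size)
              crop = dem[r0:r0 + size, c0:c0 + size]
              # Keep craters whose centers fall inside the crop, re-centered
              inside = [(r - r0, c - c0, rad) for r, c, rad in craters
                        if r0 <= r < r0 + size and c0 <= c < c0 + size]
              yield crop, inside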

  7. A model of the pre-assessment learning effects of assessment is operational in an undergraduate clinical context

    PubMed Central

    2012-01-01

    Background: No validated model exists to explain the learning effects of assessment, a problem when designing and researching assessment for learning. We recently developed a model explaining the pre-assessment learning effects of summative assessment in a theory teaching context. The challenge now is to validate this model. The purpose of this study was to explore whether the model was operational in a clinical context as a first step in this process. Methods: Given the complexity of the model, we adopted a qualitative approach. Data from in-depth interviews with eighteen medical students were subjected to content analysis. We utilised a code book developed previously using grounded theory. During analysis, we remained alert to data that might not conform to the coding framework and open to the possibility of deploying inductive coding. Ethical clearance and informed consent were obtained. Results: The three components of the model, i.e., assessment factors, mechanism factors and learning effects, were all evident in the clinical context. Associations between these components could all be explained by the model. Interaction with preceptors was identified as a new subcomponent of assessment factors. The model could explain the interrelationships of the three facets of this subcomponent, i.e., regular accountability, personal consequences and emotional valence of the learning environment, with previously described components of the model. Conclusions: The model could be utilised to analyse and explain observations in an assessment context different to that from which it was derived. In the clinical setting, the (negative) influence of preceptors on student learning was particularly prominent. In this setting, learning effects resulted not only from the high-stakes nature of summative assessment but also from personal stakes, e.g. for esteem and agency. The results suggest that to influence student learning, consequences should accrue from assessment that are immediate, concrete and substantial. The model could have utility as a planning or diagnostic tool in practice and research settings. PMID:22420839

  8. Fast high-energy X-ray imaging for Severe Accidents experiments on the future PLINIUS-2 platform

    NASA Astrophysics Data System (ADS)

    Berge, L.; Estre, N.; Tisseur, D.; Payan, E.; Eck, D.; Bouyer, V.; Cassiaut-Louis, N.; Journeau, C.; Tellier, R. Le; Pluyette, E.

    2018-01-01

    The future PLINIUS-2 platform of CEA Cadarache will be dedicated to the study of corium interactions in severe nuclear accidents and will host innovative large-scale experiments. The Nuclear Measurement Laboratory of CEA Cadarache is in charge of the real-time high-energy X-ray imaging set-ups for the study of the corium-water and corium-sodium interactions and of the corium stratification process. Imaging such large and high-density objects requires a 15 MeV linear electron accelerator coupled to a tungsten target creating a high-energy Bremsstrahlung X-ray flux, with a corresponding dose rate of about 100 Gy/min at 1 m. The signal is detected by phosphor screens coupled to high-framerate scientific CMOS cameras. The imaging set-up is designed using experimentally validated in-house simulation software (MODHERATO). The code computes quantitative radiographic signals from a description of the source, the object geometry and composition, the detector, and the geometrical configuration (magnification factor, etc.). It accounts for several noise sources (photonic and electronic noise, Swank and readout noise) and for image blur due to the source spot size and the detector unsharpness. With a view to PLINIUS-2, the simulation has been improved to account for the scattered flux, which is expected to be significant. The paper presents the scattered-flux calculation using the MCNP transport code and its integration into the MODHERATO simulation. The validation of the improved simulation is then presented through comparison with real measurement images taken on a small-scale equivalent set-up on the PLINIUS platform. Excellent agreement is achieved. This improved simulation is therefore being used to design the PLINIUS-2 imaging set-ups (source, detectors, cameras, etc.).
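
    The core of such a quantitative radiographic simulation can be sketched compactly. The following is an illustrative, mono-energetic simplification, assuming Beer-Lambert attenuation plus Poisson (photonic) and Gaussian readout noise; MODHERATO itself works with full Bremsstrahlung spectra, scattered flux, spot-size blur, and detector unsharpness.

      # Hedged sketch: a primary-only radiographic signal under the
      # Beer-Lambert law with simple noise models. Mono-energetic and
      # blur-free by assumption; values are illustrative.
      import numpy as np

      def primary_image(flux0, mu_map, pixel_path_cm, readout_sigma=5.0, rng=None):
          """flux0: photons per pixel; mu_map: attenuation coefficients (1/cm)."""
          rng = rng or np.random.default_rng()
          expected = flux0 * np.exp(-mu_map * pixel_path_cm)      # Beer-Lambert law
          noisy = rng.poisson(expected).astype(float)             # photonic noise
          noisy += rng.normal(0.0, readout_sigma, mu_map.shape)   # readout noise
          return noisy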

  9. Children's Behavior in the Postanesthesia Care Unit: The Development of the Child Behavior Coding System-PACU (CBCS-P)

    PubMed Central

    Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.

    2012-01-01

    Objective: To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods: A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results: Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared with children's analgesic consumption and pain scores. Conclusions: The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123

  10. Magnetotelluric 3-D inversion—a review of two successful workshops on forward and inversion code testing and comparison

    NASA Astrophysics Data System (ADS)

    Miensopust, Marion P.; Queralt, Pilar; Jones, Alan G.; 3D MT modellers

    2013-06-01

    Over the last half decade the need for, and importance of, three-dimensional (3-D) modelling of magnetotelluric (MT) data have increased dramatically and various 3-D forward and inversion codes are in use and some have become commonly available. Comparison of forward responses and inversion results is an important step for code testing and validation prior to 'production' use. The various codes use different mathematical approximations to the problem (finite differences, finite elements or integral equations), various orientations of the coordinate system, different sign conventions for the time dependence and various inversion strategies. Additionally, the obtained results are dependent on data analysis, selection and correction as well as on the chosen mesh, inversion parameters and regularization adopted, and therefore, a careful and knowledge-based use of the codes is essential. In 2008 and 2011, during two workshops at the Dublin Institute for Advanced Studies over 40 people from academia (scientists and students) and industry from around the world met to discuss 3-D MT inversion. These workshops brought together a mix of code writers as well as code users to assess the current status of 3-D modelling, to compare the results of different codes, and to discuss and think about future improvements and new aims in 3-D modelling. To test the numerical forward solutions, two 3-D models were designed to compare the responses obtained by different codes and/or users. Furthermore, inversion results of these two data sets and two additional data sets obtained from unknown models (secret models) were also compared. In this manuscript the test models and data sets are described (supplementary files are available) and comparisons of the results are shown. Details regarding the used data, forward and inversion parameters as well as computational power are summarized for each case, and the main discussion points of the workshops are reviewed. In general, the responses obtained from the various forward models are comfortingly very similar, and discrepancies are mainly related to the adopted mesh. For the inversions, the results show how the inversion outcome is affected by distortion and the choice of errors, as well as by the completeness of the data set. We hope that these compilations will become useful not only for those that were involved in the workshops, but for the entire MT community and also the broader geoscience community who may be interested in the resolution offered by MT.

  11. Interviews with children about their mental health problems: The congruence and validity of information that children report.

    PubMed

    Macleod, Emily; Woolford, June; Hobbs, Linda; Gross, Julien; Hayne, Harlene; Patterson, Tess

    2017-04-01

    To obtain a child's perspective during a mental health assessment, he or she is usually interviewed. Although researchers and clinicians generally agree that it is beneficial to hear a child's account of his or her presenting issues, there is debate about whether children provide reliable or valid clinical information during these interviews. Here, we examined whether children provide clinically and diagnostically relevant information in a clinical setting. In all, 31 children aged 5-12 years undergoing mental health assessments were asked open-ended questions about their presenting problems during a semi-structured interview. We coded the information that children reported to determine whether it was clinically relevant and could be used to diagnose their problems and to formulate and plan treatment. We also coded children's information to determine whether it was congruent with the children's presenting problems and their eventual clinical diagnoses. Most of the information that children reported was clinically relevant and included information about behaviour, affect, temporal details, thoughts, people, the environment, and the child's physical experiences. The information that children reported was also clinically valid; it was congruent with the problems that were discussed (84%) and also with the eventual diagnosis that the child received after a complete assessment (74%). We conclude that children can contribute relevant, clinically useful, valid information during clinical psychological assessments.

  12. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The thermal code validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) the test article design; (3) the test matrix development; and (4) the test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.

  13. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated through the use of Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids computed by the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating the designs of grids.
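
    The reported quantities reduce to simple ratios of Monte Carlo tallies of primary (P) and scattered (S) fluence at the receptor, scored with and without the grid in place. A minimal sketch, with the tally names as assumptions about a generic scoring setup:

      # Hedged sketch: grid figures of merit from generic MC tallies of
      # primary (P) and scattered (S) fluence, with and without the grid.
      def grid_figures(P_no, S_no, P_grid, S_grid):
          Tp = P_grid / P_no                       # primary transmission
          Ts = S_grid / S_no                       # scatter transmission
          Tt = (P_grid + S_grid) / (P_no + S_no)   # total transmission
          return {"Tp": Tp, "Ts": Ts, "Tt": Tt,
                  "SPR_no_grid": S_no / P_no,      # scatter-to-primary ratios
                  "SPR_grid": S_grid / P_grid}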

  14. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation will use new and improved low Earth orbit (LEO) environmental models, together with a recently improved International Space Station (ISS) shield model, to validate computational models and procedures against measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  15. Do Over or Make Do? Climate Models as a Software Development Challenge (Invited)

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.

    2010-12-01

    We present the results of a comparative study of the software engineering culture and practices at four different earth system modeling centers: the UK Met Office Hadley Centre, the National Center for Atmospheric Research (NCAR), the Max-Planck-Institut für Meteorologie (MPI-M), and the Institut Pierre Simon Laplace (IPSL). The study investigated the software tools and techniques used at each center to assess their effectiveness. We also investigated how differences in organizational structures, collaborative relationships, and technical infrastructures constrain software development and affect software quality. Specific questions for the study included: (1) Verification and validation - What techniques are used to ensure that the code matches the scientists' understanding of what it should do, and how effective are these at eliminating errors of correctness and errors of understanding? (2) Coordination - How are contributions from across the modeling community coordinated? For coupled models, how are the differing priorities of different, overlapping communities of users addressed? (3) Division of responsibility - How are the responsibilities for coding, verification, and coordination distributed among different roles (scientific, engineering, support) in the organization? (4) Planning and release processes - How do modelers decide on priorities for model development, and how do they decide which changes to tackle in a particular release of the model? (5) Debugging - How do scientists debug the models, what types of bugs do they find in their code, and how do they find them? The results show that each center has evolved a set of model development practices that are tailored to its needs and organizational constraints. These practices emphasize scientific validity but tend to neglect other software qualities, and all the centers struggle frequently with software problems. The testing processes are effective at removing software errors prior to release, but the code is hard to understand and hard to change. Software errors and model configuration problems are common during model development and appear to have a serious impact on scientific productivity. These problems have grown dramatically in recent years with the growth in size and complexity of earth system models. Much of the success in obtaining valid simulations from the models depends on the scientists developing their own code, experimenting with alternatives, running frequent full-system tests, and exploring patterns in the results. Blind application of generic software engineering processes is unlikely to work well. Instead, each center needs to learn how to balance the need for better coordination through a more disciplined approach against the freedom to explore, and the value of having scientists work directly with the code. This suggests that each center can learn a lot from comparing its practices with others, but that each might need to develop a different set of best practices.

  16. CFD validation needs for advanced concepts at Northrop Corporation

    NASA Technical Reports Server (NTRS)

    George, Michael W.

    1987-01-01

    Information is given in viewgraph form on the Computational Fluid Dynamics (CFD) Workshop held July 14 - 16, 1987. Topics covered include the philosophy of CFD validation, current validation efforts, the wing-body-tail Euler code, F-20 Euler simulated oil flow, and Euler Navier-Stokes code validation for 2D and 3D nozzle afterbody applications.

  17. TH-AB-BRA-07: PENELOPE-Based GPU-Accelerated Dose Calculation System Applied to MRI-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y; Mazur, T; Green, O

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian's KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next-generation MR-IGRT systems.
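
    Woodcock (delta) tracking, credited above with boosting voxelized transport, is easy to sketch: sample free paths against the maximum cross section of the whole geometry and accept a real collision with probability mu(x)/mu_max. The following one-dimensional, mono-energetic illustration is a sketch of the general technique, not gPENELOPE's implementation.

      # Hedged sketch of Woodcock (delta) tracking in 1-D. mu_of(x) must be
      # bounded above by mu_max everywhere; direction is +1 or -1.
      import numpy as np

      def woodcock_free_path(x, direction, mu_of, mu_max, rng):
          """Advance a particle to its next real collision site."""
          while True:
              x += direction * (-np.log(rng.random()) / mu_max)  # tentative step
              if rng.random() < mu_of(x) / mu_max:               # real collision?
                  return x                                       # yes: collide here
              # otherwise the collision was virtual: keep flying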

  18. Administrative data measured surgical site infection probability within 30 days of surgery in elderly patients.

    PubMed

    van Walraven, Carl; Jackson, Timothy D; Daneman, Nick

    2016-09-01

    Elderly patients are inordinately affected by surgical site infections (SSIs). This study derived and internally validated a model that uses routinely collected health administrative data to measure the probability of SSI in elderly patients within 30 days of surgery. All people older than 65 years undergoing surgery at two hospitals with known SSI status were linked to population-based administrative data sets in Ontario, Canada. We used bootstrap methods to create a multivariate model that uses health administrative data to predict the probability of SSI. Of 3,436 patients, 177 (5.1%) had an SSI. The Elderly SSI Risk Model included six covariates: the number of distinct physician fee codes within 30 days of surgery; the presence or absence of a postdischarge prescription for an antibiotic; the presence or absence of three diagnostic codes; and a previously derived score that gauged SSI risk based on procedure codes. The model was highly explanatory (Nagelkerke's R², 0.458), strongly discriminative (C statistic, 0.918), and well calibrated (calibration slope, 1). Health administrative data can effectively determine the 30-day risk of SSI in elderly patients undergoing a broad assortment of surgeries. External validation is necessary before this can be routinely used to monitor SSIs in the elderly. Copyright © 2016 Elsevier Inc. All rights reserved.
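
    A minimal sketch of the general derivation-and-validation pattern described, assuming scikit-learn and invented covariate arrays; it illustrates bootstrap assessment of discrimination (the C statistic), not the authors' exact modelling protocol.

      # Hedged sketch: logistic model for 30-day SSI with bootstrap AUC.
      # X (n x p covariate matrix) and y (0/1 SSI labels) are placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def bootstrap_auc(X, y, n_boot=200, rng=None):
          rng = rng or np.random.default_rng()
          aucs = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(y), len(y))   # resample with replacement
              model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
              # Evaluate each bootstrap model on the full sample
              aucs.append(roc_auc_score(y, model.predict_proba(X)[:, 1]))
          return float(np.mean(aucs))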

  19. Using ZIP Code Business Patterns Data to Measure Alcohol Outlet Density

    PubMed Central

    Matthews, Stephen A.; McCarthy, John D.; Rafail, Patrick S.

    2014-01-01

    Some states maintain high-quality alcohol outlet databases but quality varies by state, making comprehensive comparative analysis across US communities difficult. This study assesses the adequacy of using ZIP Code Business Patterns (ZIP-BP) data on establishments as estimates of the number of alcohol outlets by ZIP code. Specifically we compare ZIP-BP alcohol outlet counts with high-quality data from state and local records surrounding 44 college campus communities across 10 states plus the District of Columbia. Results show that a composite measure is strongly correlated (R=0.89) with counts of alcohol outlets generated from official state records. Analyses based on Generalized Estimation Equation models show that community and contextual factors have little impact on the concordance between the two data sources. There are also minimal inter-state differences in the level of agreement. To validate the use of a convenient secondary data set (ZIP-BP) it is important to have a high correlation with the more complex, higher-quality, and more costly data product (i.e., datasets based on the acquisition and geocoding of state and local records) and then to demonstrate clearly that the discrepancy between the two is unrelated to relevant explanatory variables. Thus our overall findings support the adequacy of using a conveniently available data set (ZIP-BP data) to estimate alcohol outlet densities in ZIP code areas in future research. PMID:21411233

  20. Using ZIP code business patterns data to measure alcohol outlet density.

    PubMed

    Matthews, Stephen A; McCarthy, John D; Rafail, Patrick S

    2011-07-01

    Some states maintain high-quality alcohol outlet databases but quality varies by state, making comprehensive comparative analysis across US communities difficult. This study assesses the adequacy of using ZIP Code Business Patterns (ZIP-BP) data on establishments as estimates of the number of alcohol outlets by ZIP code. Specifically we compare ZIP-BP alcohol outlet counts with high-quality data from state and local records surrounding 44 college campus communities across 10 states plus the District of Columbia. Results show that a composite measure is strongly correlated (R=0.89) with counts of alcohol outlets generated from official state records. Analyses based on Generalized Estimation Equation models show that community and contextual factors have little impact on the concordance between the two data sources. There are also minimal inter-state differences in the level of agreement. To validate the use of a convenient secondary data set (ZIP-BP) it is important to have a high correlation with the more complex, higher-quality, and more costly data product (i.e., datasets based on the acquisition and geocoding of state and local records) and then to demonstrate clearly that the discrepancy between the two is unrelated to relevant explanatory variables. Thus our overall findings support the adequacy of using a conveniently available data set (ZIP-BP data) to estimate alcohol outlet densities in ZIP code areas in future research. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. The Modified Cognitive Constructions Coding System: Reliability and Validity Assessments

    ERIC Educational Resources Information Center

    Moran, Galia S.; Diamond, Gary M.

    2006-01-01

    The cognitive constructions coding system (CCCS) was designed for coding client's expressed problem constructions on four dimensions: intrapersonal-interpersonal, internal-external, responsible-not responsible, and linear-circular. This study introduces, and examines the reliability and validity of, a modified version of the CCCS--a version that…

  2. HBOI Underwater Imaging and Communication Research - Phase 1

    DTIC Science & Technology

    2012-04-19

    Development and validation of a one-way pulse-stretching radiative transfer code. The objective was to develop and validate time-resolved radiative transfer models that ... The models were subjected to a series of validation experiments over 12.5 meter ... Details about the theoretical basis of the model, together with validation results, can be found in Dalgleish et al. (2010). Forward scattering Mueller ...

  3. A promising method for identifying cross-cultural differences in patient perspective: the use of Internet-based focus groups for content validation of new Patient Reported Outcome assessments

    PubMed Central

    Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu

    2006-01-01

    Objectives: This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). Methods: The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures – all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. Findings: The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. Conclusion: The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost- and time-effective. The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires. PMID:16995935

  4. A promising method for identifying cross-cultural differences in patient perspective: the use of Internet-based focus groups for content validation of new patient reported outcome assessments.

    PubMed

    Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu

    2006-09-22

    This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures--all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires.

  5. Planar measurement of flow field parameters in a nonreacting supersonic combustor using laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Hollo, Steven D.; Mcdaniel, James C.

    1990-01-01

    A nonintrusive optical technique, laser-induced iodine fluorescence, has been used to obtain planar measurements of flow field parameters in the supersonic mixing flow field of a nonreacting supersonic combustor. The combustor design used in this work was configured with staged transverse sonic injection behind a rearward-facing step into a Mach 2.07 free stream. A set of spatially resolved measurements of temperature and injectant mole fraction has been generated. These measurements provide an extensive and accurate experimental data set required for the validation of computational fluid dynamic codes developed for the calculation of highly three-dimensional combustor flow fields.

  6. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code, HASC (High Angle of Attack Stability and Control), has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of the modifications, a description of the code, a user's guide, and a validation of HASC. Appendices include a discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  7. Rapid Prototyping: A Survey and Evaluation of Methodologies and Models

    DTIC Science & Technology

    1990-03-01

    ... possibility of program coding errors or design differences from the actual prototype the user validated. The methodology should result in a production ... behavior within the problem domain to be defined. Each method has a different approach towards developing the set of symbols with which to define the ... investigate prototyping as a viable alternative to the conventional method of software development. By the mid 1980's, it was evident that the traditional ...

  8. Act No. 1183, Civil Code, 23 December 1985.

    PubMed

    1987-01-01

    This document contains major provisions of Paraguay's 1985 Civil Code. The Code sets the marriage age at 16 for males and 14 for females and forbids marriage between natural and adopted relatives as well as between persons of the same sex. Bigamy is forbidden, as is marriage between a person and someone convicted of attempting or committing homicide against that person's spouse. Legal incompetents may not marry. Underage minors may marry with the permission of their parents or a court. Noted among the rights and duties of a married couple is the stipulation that husbands (or a judge) must give their approval before wives can legally run a business or work outside of the house or perform other specified activities. Valid marriages are dissolved only upon the death of one spouse. Remarriage in Paraguay after divorce abroad is forbidden. Spouses may legally separate after 2 years of married life (married minors must remain together until 2 years past the age of majority). Marital separation may be requested for adultery, attempted homicide by one spouse upon the other, dishonest or immoral conduct, extreme cruelty or abuse, voluntary or malicious abandonment, or the state of habitual intoxication or repeated use of drugs. Marriages can be annulled in specified cases. Marital property is subject to the community property regime, but each spouse may retain control of specified types of personal property. The Code appoints the husband as manager of community property within limits and reserves certain property to the wife. The Code permits premarital agreements about property management, and covers the dissolution and liquidation of the community property regime. The Code also sets provisions governing "de facto" unions; filiation for children born in and outside of wedlock; claims for parental recognition; kinship; and the duty to provide maintenance to spouses, children, and other relatives.

  9. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes, and a high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory and is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  10. PCC Framework for Program-Generators

    NASA Technical Reports Server (NTRS)

    Kong, Soonho; Choi, Wontae; Yi, Kwangkeun

    2009-01-01

    In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
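
    The consumer-side check the abstract describes reduces to verifying a fixed point in a single pass. A minimal sketch, where the transfer function and abstract domain are placeholders standing in for the paper's abstract-parsing analysis:

      # Hedged sketch: single-pass certificate validation in a PCC setting.
      # `transfer` stands in for the analysis transfer function of the
      # abstract-parsing framework; both it and the certificate format are
      # illustrative placeholders, not the paper's actual definitions.
      def validate_certificate(code, certificate, transfer):
          """Accept iff `certificate` is a fixed point of the analysis of `code`."""
          return transfer(code, certificate) == certificate

      # The producer ships (code, certificate); the consumer applies the
      # transfer function once, so validation costs one analysis pass rather
      # than a full fixed-point iteration.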

  11. NEAMS Update. Quarterly Report for October - December 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.

    2012-02-16

    The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given. (2) A coolant sub-channel model and a preliminary UO2 smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given. (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given. (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given. (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP; this is a new initiative. (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability. (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed; this important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable-splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.

  12. Development, Verification and Validation of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating is a requirement. This paper presents the recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
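
    View factor computation, mentioned above, admits a compact Monte Carlo illustration. The sketch below estimates the view factor between two parallel unit squares by casting diffusely distributed rays; the abstract does not state which methods CHAR implements, so this is a generic example only.

      # Hedged sketch: Monte Carlo view factor from a unit square at z=0 to
      # a parallel unit square at z=h, by diffuse (cosine-weighted) ray
      # casting. Generic illustration, not CHAR's implementation.
      import numpy as np

      def view_factor_mc(n_rays=200_000, h=1.0, rng=None):
          rng = rng or np.random.default_rng()
          # Uniform emission points on the lower square
          x0, y0 = rng.random(n_rays), rng.random(n_rays)
          # Cosine-weighted (Lambertian) directions: sin(theta) = sqrt(u)
          theta = np.arcsin(np.sqrt(rng.random(n_rays)))
          phi = 2 * np.pi * rng.random(n_rays)
          dx = np.sin(theta) * np.cos(phi)
          dy = np.sin(theta) * np.sin(phi)
          dz = np.cos(theta)
          # Intersect with the plane z=h and test landing inside the top square
          t = h / dz
          xh, yh = x0 + t * dx, y0 + t * dy
          hit = (0 <= xh) & (xh <= 1) & (0 <= yh) & (yh <= 1)
          return hit.mean()   # ~0.1998 for h=1 (analytic parallel-plate value)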

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinh, Nam; Athe, Paridhi; Jones, Christopher

    The Virtual Environment for Reactor Applications (VERA) code suite is assessed in terms of capability and credibility against the Consortium for Advanced Simulation of Light Water Reactors (CASL) Verification and Validation Plan (presented herein) in the context of three selected challenge problems: CRUD-Induced Power Shift (CIPS), Departure from Nucleate Boiling (DNB), and Pellet-Clad Interaction (PCI). Capability refers to evidence of required functionality for capturing phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements against which the VERA software is assessed. This approach, in turn, enables the focused assessment of only those capabilities relevant to the challenge problem. The evaluation of VERA against the challenge problem requirements represents a capability assessment. The mechanism for assessment is the Sandia-developed Predictive Capability Maturity Model (PCMM) that, for this assessment, evaluates VERA on 8 major criteria: (1) Representation and Geometric Fidelity, (2) Physics and Material Model Fidelity, (3) Software Quality Assurance and Engineering, (4) Code Verification, (5) Solution Verification, (6) Separate Effects Model Validation, (7) Integral Effects Model Validation, and (8) Uncertainty Quantification. For each attribute, a maturity score from zero to three is assigned in the context of each challenge problem. The evaluation of these eight elements constitutes the credibility assessment for VERA.

  14. Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study.

    PubMed

    Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald

    2015-03-21

    Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) using an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with a 1%/1 mm criterion failed for most beams for this site, while for a 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, calculation time for a single beam for a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms with accuracy comparable to state-of-the-art CPU-based MC codes.
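
    The gamma analysis used above can be sketched in one dimension. The following brute-force illustration applies the 1%/1 mm criterion with a global dose normalization and a low-dose cutoff approximating "more than 10% of the mean target dose"; real tools work on 3-D grids, and normalization conventions vary, so treat this as illustrative only.

      # Hedged sketch: brute-force 1-D gamma index with a 1%/1 mm criterion.
      # Global normalization and the low-dose cutoff are assumptions.
      import numpy as np

      def gamma_pass_rate(dose_eval, dose_ref, dx_mm, dd=0.01, dta_mm=1.0,
                          low_dose_cutoff=0.10):
          x = np.arange(len(dose_ref)) * dx_mm
          norm = dose_ref.max()                       # global dose normalization
          mask = dose_ref > low_dose_cutoff * dose_ref.mean()  # proxy for >10% of mean dose
          gammas = []
          for i in np.flatnonzero(mask):
              # gamma_i = min over j of sqrt((dDose/dd)^2 + (dDist/dta)^2)
              delta_d = (dose_eval - dose_ref[i]) / (dd * norm)
              delta_x = (x - x[i]) / dta_mm
              gammas.append(np.sqrt(delta_d ** 2 + delta_x ** 2).min())
          return np.mean(np.asarray(gammas) <= 1.0)   # fraction of passing voxels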

  15. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    NASA Astrophysics Data System (ADS)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.

    2017-06-01

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent with the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression tests and reference data sets, and provides comparisons for a suite of community-prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model-specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
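
    The bit-for-bit evaluation mentioned above can be illustrated compactly. A minimal sketch, assuming netCDF model output and the netCDF4 Python package; the file layout and variable list are placeholders, and this is not LIVVkit's actual interface.

      # Hedged sketch: bit-for-bit regression check between a test run and a
      # reference run, reporting exact matches or maximum absolute differences.
      import numpy as np
      import netCDF4

      def bit_for_bit(test_file, ref_file, variables):
          report = {}
          with netCDF4.Dataset(test_file) as t, netCDF4.Dataset(ref_file) as r:
              for name in variables:
                  a, b = t[name][:], r[name][:]
                  same = np.array_equal(a, b)          # exact (bit-for-bit) match?
                  report[name] = (same, 0.0 if same else float(np.max(np.abs(a - b))))
          return report

      # Example (invented paths): bit_for_bit("test.nc", "ref.nc", ["thk", "usurf"])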

  16. Development of the Chronic Pain Coding System (CPCS) for Characterizing Patient-Clinician Discussions About Chronic Pain and Opioids

    PubMed Central

    Chen, Meng; Matthias, Marianne S.; Bell, Robert A.; Kravitz, Richard L.

    2016-01-01

    Objective. To describe the development and initial application of the Chronic Pain Coding System. Design. Secondary analysis of data from a randomized clinical trial. Setting. Six primary care clinics in northern California. Subjects. Forty-five primary care visits involving 33 clinicians and 45 patients on opioids for chronic noncancer pain. Methods. The authors developed a structured coding system to accurately and objectively characterize discussions about pain and opioids. Two coders applied the final system to visit transcripts. Intercoder agreement for major coding categories was moderate to substantial (kappa = 0.5–0.7). Mixed effects regression was used to test six hypotheses to assess preliminary construct validity. Results. Greater baseline pain interference was associated with longer pain discussions (P = 0.007) and more patient requests for clinician action (P = 0.02) but not more frequent negative patient evaluations of pain (P = 0.15). Greater clinician-reported visit difficulty was associated with more frequent disagreements with clinician recommendations (P = 0.003) and longer discussions of opioid risks (P = 0.049) but not more frequent requests for clinician action (P = 0.11). Rates of agreement versus disagreement with patient requests and clinician recommendations were similar for opioid-related and non-opioid–related utterances. Conclusions. This coding system appears to be a reliable and valid tool for characterizing patient-clinician communication about opioids and chronic pain during clinic visits. Objective data on how patients and clinicians discuss chronic pain and opioids are necessary to identify communication patterns and strategies for improving the quality and productivity of discussions about chronic pain that may lead to more effective pain management and reduce inappropriate opioid prescribing. PMID:26936453
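
    For illustration of the agreement statistic reported above (kappa = 0.5-0.7), here is a minimal Python sketch of Cohen's kappa for two coders' labels over the same utterances (toy labels; not the authors' code):

        from collections import Counter

        def cohens_kappa(coder_a, coder_b):
            """Cohen's kappa: observed agreement corrected for the agreement
            expected by chance given each coder's marginal frequencies."""
            n = len(coder_a)
            observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
            pa, pb = Counter(coder_a), Counter(coder_b)
            expected = sum(pa[c] * pb[c] for c in pa) / (n * n)
            return (observed - expected) / (1 - expected)

        # Toy example: two coders label three utterances.
        print(cohens_kappa(["request", "eval", "eval"],
                           ["request", "eval", "request"]))  # 0.4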

  17. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase 1 consists of code-to-code verification, and Phase 2 entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.

  18. Effects of cavity dimensions, boundary layer, and temperature on cavity noise with emphasis on benchmark data to validate computational aeroacoustic codes

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.; Mendoza, J.

    1995-01-01

    This report documents the results of an experimental investigation on the response of a cavity to external flowfields. The primary objective of this research was to acquire benchmark data on the effects of cavity length, width, depth, upstream boundary layer, and flow temperature on cavity noise. These data were to be used for validation of computational aeroacoustic (CAA) codes on cavity noise. To achieve this objective, a systematic set of acoustic and flow measurements was made for subsonic turbulent flows approaching a cavity. These measurements were conducted in the research facilities of the Georgia Tech Research Institute. Two cavity models were designed, one for heated flow and another for unheated flow studies. Both models were designed such that the cavity length (L) could easily be varied while holding fixed the depth (D) and width (W) dimensions of the cavity. Depth and width blocks were manufactured so that these dimensions could be varied as well. A wall jet issuing from a rectangular nozzle was used to simulate flows over the cavity.

  19. Systematic Review of Methods in Low-Consensus Fields: Supporting Commensuration through `Construct-Centered Methods Aggregation' in the Case of Climate Change Vulnerability Research.

    PubMed

    Delaney, Aogán; Tamás, Peter A; Crane, Todd A; Chesterman, Sabrina

    2016-01-01

    There is increasing interest in using systematic review to synthesize evidence on the social and environmental effects of and adaptations to climate change. Use of systematic review for evidence in this field is complicated by the heterogeneity of methods used and by uneven reporting. In order to facilitate synthesis of results and design of subsequent research, a method, construct-centered methods aggregation, was designed to 1) provide a transparent, valid and reliable description of research methods, 2) support comparability of primary studies and 3) contribute to a shared empirical basis for improving research practice. Rather than taking research reports at face value, research designs are reviewed through inductive analysis. This involves bottom-up identification of constructs, definitions and operationalizations; assessment of concepts' commensurability through comparison of definitions; identification of theoretical frameworks through patterns of construct use; and integration of transparently reported and valid operationalizations into ideal-type research frameworks. Through the integration of reliable bottom-up inductive coding from operationalizations and top-down coding driven from stated theory with expert interpretation, construct-centered methods aggregation enabled both resolution of heterogeneity within identically named constructs and merging of differently labeled but identical constructs. These two processes allowed transparent, rigorous and contextually sensitive synthesis of the research presented in an uneven set of reports undertaken in a heterogeneous field. If adopted more broadly, construct-centered methods aggregation may contribute to the emergence of a valid, empirically-grounded description of methods used in primary research. These descriptions may function as a set of expectations that improves the transparency of reporting and as an evolving comprehensive framework that supports both interpretation of existing and design of future research.

  20. Systematic Review of Methods in Low-Consensus Fields: Supporting Commensuration through `Construct-Centered Methods Aggregation’ in the Case of Climate Change Vulnerability Research

    PubMed Central

    Crane, Todd A.; Chesterman, Sabrina

    2016-01-01

    There is increasing interest in using systematic review to synthesize evidence on the social and environmental effects of and adaptations to climate change. Use of systematic review for evidence in this field is complicated by the heterogeneity of methods used and by uneven reporting. In order to facilitate synthesis of results and design of subsequent research, a method, construct-centered methods aggregation, was designed to 1) provide a transparent, valid and reliable description of research methods, 2) support comparability of primary studies and 3) contribute to a shared empirical basis for improving research practice. Rather than taking research reports at face value, research designs are reviewed through inductive analysis. This involves bottom-up identification of constructs, definitions and operationalizations; assessment of concepts’ commensurability through comparison of definitions; identification of theoretical frameworks through patterns of construct use; and integration of transparently reported and valid operationalizations into ideal-type research frameworks. Through the integration of reliable bottom-up inductive coding from operationalizations and top-down coding driven from stated theory with expert interpretation, construct-centered methods aggregation enabled both resolution of heterogeneity within identically named constructs and merging of differently labeled but identical constructs. These two processes allowed transparent, rigorous and contextually sensitive synthesis of the research presented in an uneven set of reports undertaken in a heterogeneous field. If adopted more broadly, construct-centered methods aggregation may contribute to the emergence of a valid, empirically-grounded description of methods used in primary research. These descriptions may function as a set of expectations that improves the transparency of reporting and as an evolving comprehensive framework that supports both interpretation of existing and design of future research. PMID:26901409

  1. DEVELOPMENT AND TESTING OF FAULT-DIAGNOSIS ALGORITHMS FOR REACTOR PLANT SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grelle, Austin L.; Park, Young S.; Vilim, Richard B.

    Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions, with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement, which is to accept sensor information, identify process variable trends based on this sensor information, and then return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of sensors were performed as part of the validation exercise. Background and an overview of the software are presented to describe the approach, followed by the verification and validation effort using the GPASS code for simulation of plant transients, including a sensitivity study on important parameters.
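
    As a toy illustration of diagnosis by chaining rules over process-variable trends (a deliberately simplified stand-in; the Argonne algorithms derive their rules from the piping and instrumentation diagram, and the fault names below are invented):

        def trend(samples, eps=0.01):
            """Classify a process-variable history as rising, falling or steady."""
            slope = (samples[-1] - samples[0]) / max(len(samples) - 1, 1)
            if slope > eps:
                return "rising"
            if slope < -eps:
                return "falling"
            return "steady"

        # Hypothetical rule base: (tank-level trend, charging-flow trend) -> fault.
        RULES = {
            ("falling", "rising"): "letdown line leak",
            ("falling", "steady"): "charging pump degradation",
        }

        def diagnose(level_samples, flow_samples):
            pattern = (trend(level_samples), trend(flow_samples))
            return RULES.get(pattern, "no matching fault")

        print(diagnose([10.0, 9.5, 9.1], [5.0, 5.4, 5.9]))  # letdown line leak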

  2. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
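
    The two-level external cross-validation recommended above can be sketched with scikit-learn (the paper's own implementation is in R with the PAMR package; the synthetic data below stand in for a gene expression matrix): hyperparameters are tuned in an inner loop while an outer loop estimates the error of the entire tuning procedure.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
        from sklearn.svm import SVC

        # Toy stand-in for gene expression data: many features, few samples.
        X, y = make_classification(n_samples=100, n_features=500, random_state=0)

        # The inner loop selects C; the outer loop never shows its test folds
        # to the tuning step, which avoids the optimization bias described above.
        inner = KFold(n_splits=5, shuffle=True, random_state=1)
        outer = KFold(n_splits=5, shuffle=True, random_state=2)
        model = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)
        scores = cross_val_score(model, X, y, cv=outer)
        print("two-level CV accuracy estimate: %.3f" % scores.mean())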

  3. Prevalence of transcription promoters within archaeal operons and coding sequences

    PubMed Central

    Koide, Tie; Reiss, David J; Bare, J Christopher; Pang, Wyming Lee; Facciotti, Marc T; Schmid, Amy K; Pan, Min; Marzolf, Bruz; Van, Phu T; Lo, Fang-Yin; Pratap, Abhishek; Deutsch, Eric W; Peterson, Amelia; Martin, Dan; Baliga, Nitin S

    2009-01-01

    Despite the knowledge of complex prokaryotic-transcription mechanisms, generalized rules, such as the simplified organization of genes into operons with well-defined promoters and terminators, have had a significant role in systems analysis of regulatory logic in both bacteria and archaea. Here, we have investigated the prevalence of alternate regulatory mechanisms through genome-wide characterization of transcript structures of ∼64% of all genes, including putative non-coding RNAs in Halobacterium salinarum NRC-1. Our integrative analysis of transcriptome dynamics and protein–DNA interaction data sets showed widespread environment-dependent modulation of operon architectures, transcription initiation and termination inside coding sequences, and extensive overlap in 3′ ends of transcripts for many convergently transcribed genes. A significant fraction of these alternate transcriptional events correlate to binding locations of 11 transcription factors and regulators (TFs) inside operons and annotated genes—events usually considered spurious or non-functional. Using experimental validation, we illustrate the prevalence of overlapping genomic signals in archaeal transcription, casting doubt on the general perception of rigid boundaries between coding sequences and regulatory elements. PMID:19536208

  4. Prevalence of transcription promoters within archaeal operons and coding sequences.

    PubMed

    Koide, Tie; Reiss, David J; Bare, J Christopher; Pang, Wyming Lee; Facciotti, Marc T; Schmid, Amy K; Pan, Min; Marzolf, Bruz; Van, Phu T; Lo, Fang-Yin; Pratap, Abhishek; Deutsch, Eric W; Peterson, Amelia; Martin, Dan; Baliga, Nitin S

    2009-01-01

    Despite the knowledge of complex prokaryotic-transcription mechanisms, generalized rules, such as the simplified organization of genes into operons with well-defined promoters and terminators, have had a significant role in systems analysis of regulatory logic in both bacteria and archaea. Here, we have investigated the prevalence of alternate regulatory mechanisms through genome-wide characterization of transcript structures of approximately 64% of all genes, including putative non-coding RNAs in Halobacterium salinarum NRC-1. Our integrative analysis of transcriptome dynamics and protein-DNA interaction data sets showed widespread environment-dependent modulation of operon architectures, transcription initiation and termination inside coding sequences, and extensive overlap in 3' ends of transcripts for many convergently transcribed genes. A significant fraction of these alternate transcriptional events correlate to binding locations of 11 transcription factors and regulators (TFs) inside operons and annotated genes-events usually considered spurious or non-functional. Using experimental validation, we illustrate the prevalence of overlapping genomic signals in archaeal transcription, casting doubt on the general perception of rigid boundaries between coding sequences and regulatory elements.

  5. High-speed reacting flow simulation using USA-series codes

    NASA Astrophysics Data System (ADS)

    Chakravarthy, S. R.; Palaniswamy, S.

    In this paper, the finite-rate chemistry (FRC) formulation for the USA-series of codes and three sets of validations are presented. The USA-series computational fluid dynamics (CFD) codes are based on Unified Solution Algorithms, including explicit and implicit formulations, factorization and relaxation approaches, time-marching and space-marching methodologies, etc., in order to solve a very wide class of CFD problems within a single framework. Euler or Navier-Stokes equations are solved using a finite-volume treatment with upwind Total Variation Diminishing discretization for the inviscid terms. Perfect and real gas options are available, including equilibrium and nonequilibrium chemistry. This capability has been widely used to study various problems, including Space Shuttle exhaust plumes, National Aerospace Plane (NASP) designs, etc. (1) Numerical solutions are presented showing the full range of possible solutions to steady detonation wave problems. (2) A comparison between the solution obtained by the USA code and the Generalized Kinetics Analysis Program (GKAP) is shown for supersonic combustion in a duct. (3) Simulation of combustion in a supersonic shear layer is shown to have reasonable agreement with experimental observations.

  6. Seeing the Invisible: Embedding Tests in Code That Cannot be Modified

    NASA Technical Reports Server (NTRS)

    O'Malley, Owen; Mansouri-Samani, Masoud; Mehlitz, Peter; Penix, John

    2005-01-01

    Characterizing and observing valid software behavior during testing can be very difficult in flight systems. To address this issue, we evaluated several approaches to increasing test observability on the Shuttle Abort Flight Management (SAFM) system. To increase test observability, we added probes into the running system to evaluate the internal state and analyze test data. To minimize the impact of the instrumentation and reduce manual effort, we used Aspect-Oriented Programming (AOP) tools to instrument the source code. We developed and elicited a spectrum of properties, from generic to application-specific properties, to be monitored via the instrumentation. To evaluate additional approaches, SAFM was ported to Linux, enabling the use of gcov for measuring test coverage, Valgrind for looking for memory usage errors, and libraries for finding non-normal floating point values. An in-house C++ source code scanning tool was also used to identify violations of SAFM coding standards, and other potentially problematic C++ constructs. Using these approaches with the existing test data sets, we were able to verify several important properties, confirm several problems and identify some previously unidentified issues.
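
    One of the checks mentioned, flagging non-normal floating-point values, can be illustrated with a short sketch (shown in Python for brevity; the SAFM work used C++ libraries):

        import math
        import sys

        def classify_float(x):
            """Flag IEEE-754 values that often indicate numerical trouble."""
            if math.isnan(x):
                return "nan"
            if math.isinf(x):
                return "inf"
            if x != 0.0 and abs(x) < sys.float_info.min:
                return "subnormal"  # smaller than the smallest normal double
            return "normal"

        for v in [1.0, float("nan"), float("inf"), 5e-324]:
            print(v, classify_float(v))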

  7. RETRAN03 benchmarks for Beaver Valley plant transients and FSAR analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumont, E.T.; Feltus, M.A.

    1993-01-01

    Any best-estimate code (e.g., RETRAN03) results must be validated against plant data and final safety analysis report (FSAR) predictions. Two independent means of benchmarking are necessary to ensure that the results are not biased toward a particular data set and to achieve a certain degree of accuracy. The code results need to be compared with previous results and show improvements over previous code results. Ideally, the two best means of benchmarking a thermal-hydraulics code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02 results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options: RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETRAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.

  8. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  9. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
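
    The topology check described can be illustrated, in a much-simplified form, as element-labeled graph isomorphism (this is not MotiveValidator's implementation, and the molecule encoding below is made up):

        import networkx as nx
        from networkx.algorithms.isomorphism import categorical_node_match

        def same_topology(mol_a, mol_b):
            """Each molecule is ([(atom_index, element), ...], [bond pairs])."""
            graphs = []
            for atoms, bonds in (mol_a, mol_b):
                g = nx.Graph()
                g.add_nodes_from((i, {"element": e}) for i, e in atoms)
                g.add_edges_from(bonds)
                graphs.append(g)
            return nx.is_isomorphic(*graphs,
                                    node_match=categorical_node_match("element", None))

        # Ethanol vs. dimethyl ether: same heavy-atom formula (C2O) but
        # different connectivity, so the topology check fails.
        etoh = ([(0, "C"), (1, "C"), (2, "O")], [(0, 1), (1, 2)])
        dme = ([(0, "C"), (1, "O"), (2, "C")], [(0, 1), (1, 2)])
        print(same_topology(etoh, dme))  # False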

  10. Clinical code set engineering for reusing EHR data for research: A review.

    PubMed

    Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels

    2017-06-01

    The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets which needs to be addressed to ensure the reliability and credibility of studies which use code sets. To review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management were reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets, and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
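
    Two of the common approaches identified, synonym matching and hierarchical expansion, can be sketched over a made-up miniature terminology (real studies work against Read, SNOMED CT, or ICD hierarchies):

        # Miniature ICD-10-style terminology where a child code extends its
        # parent's prefix; real terminologies are far richer.
        TERMINOLOGY = {
            "E10": "Type 1 diabetes mellitus",
            "E10.2": "Type 1 diabetes mellitus with kidney complications",
            "E11": "Type 2 diabetes mellitus",
            "I10": "Essential (primary) hypertension",
        }

        def build_code_set(synonyms):
            """Step 1: match code descriptions against a synonym list.
            Step 2: expand the matches to their hierarchical descendants."""
            seeds = {c for c, desc in TERMINOLOGY.items()
                     if any(s.lower() in desc.lower() for s in synonyms)}
            return {c for c in TERMINOLOGY
                    if any(c == s or c.startswith(s + ".") for s in seeds)}

        print(sorted(build_code_set(["diabetes"])))  # ['E10', 'E10.2', 'E11']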

  11. 45 CFR 162.1002 - Medical data code sets.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2012-10-01 2012-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...

  12. 45 CFR 162.1002 - Medical data code sets.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2014-10-01 2014-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...

  13. 45 CFR 162.1002 - Medical data code sets.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2013-10-01 2013-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...

  14. 45 CFR 162.1002 - Medical data code sets.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2011-10-01 2011-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...

  15. 45 CFR 162.1002 - Medical data code sets.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2010-10-01 2010-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...

  16. Further Validation of a CFD Code for Calculating the Performance of Two-Stage Light Gas Guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.

    2017-01-01

    Earlier validations of a higher-order Godunov code for modeling the performance of two-stage light gas guns are reviewed. These validation comparisons were made between code predictions and experimental data from the NASA Ames 1.5" and 0.28" guns and covered muzzle velocities of 6.5 to 7.2 km/s. In the present report, five more series of code validation comparisons involving experimental data from the Ames 0.22" (1.28" pump tube diameter), 0.28", 0.50", 1.00" and 1.50" guns are presented. The total muzzle velocity range of the validation data presented herein is 3 to 11.3 km/s. The agreement between the experimental data and CFD results is judged to be very good. Muzzle velocities were predicted within 0.35 km/s for 74% of the cases studied, with maximum differences of 0.5 km/s and, for 4 out of 50 cases, 0.5-0.7 km/s.

  17. Design, development and first validation of a transcoding system from ICD-9-CM to ICD-10 in the IT.DRG Italian project.

    PubMed

    Della Mea, Vincenzo; Vuattolo, Omar; Frattura, Lucilla; Munari, Flavia; Verdini, Eleonora; Zanier, Loris; Arcangeli, Laura; Carle, Flavia

    2015-01-01

    In Italy, ICD-9-CM is currently used for coding health conditions at hospital discharge, but ICD-10 is being introduced thanks to the IT-DRG Project. In this project, one needed component is a set of transcoding rules and associated tools for easing coders' work in the transition. The present paper illustrates the design and development of those transcoding rules and their preliminary testing on a subset of Italian hospital discharge data.
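
    A schematic sketch of table-driven transcoding follows (the mapping entries are illustrative, not the project's actual rules, which must also handle context-dependent cases):

        # Hypothetical mapping table: an ICD-9-CM code maps to one or more
        # candidate ICD-10 codes; ambiguous rows need coder review.
        ICD9_TO_ICD10 = {
            "250.00": ["E11.9"],           # one-to-one: automatic
            "410.9": ["I21.3", "I21.9"],   # one-to-many: ambiguous
        }

        def transcode(icd9_code):
            targets = ICD9_TO_ICD10.get(icd9_code)
            if targets is None:
                return None, "no rule: manual coding required"
            if len(targets) == 1:
                return targets[0], "automatic"
            return targets, "ambiguous: coder must choose"

        print(transcode("410.9"))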

  18. Verification and Validation Plan for Three-Dimensional Probability of Incapacitation Methodology for Masonry Structures (3DPIMMS)

    DTIC Science & Technology

    2011-01-01

    ... all panels of a test were recorded, it was reduced into text format and then input into the code. ... The capabilities ... due to fragmentation. Any or all of these models can be activated for a particular lethality assessment. Incapacitation criteria of different times ... defined for all fragments represented in the file. Only the fragment material density needs to be set by the user. 3DPIMMS accounts for some statistical ...

  19. Analysis of NASA Common Research Model Dynamic Data

    NASA Technical Reports Server (NTRS)

    Balakrishna, S.; Acheson, Michael J.

    2011-01-01

    Recent NASA Common Research Model (CRM) tests at the Langley National Transonic Facility (NTF) and Ames 11-foot Transonic Wind Tunnel (11-foot TWT) have generated an experimental database for CFD code validation. The database consists of force and moment, surface pressures and wideband wing-root dynamic strain/wing Kulite data from continuous sweep pitch polars. The dynamic data sets, acquired at 12,800 Hz sampling rate, are analyzed in this study to evaluate CRM wing buffet onset and potential CRM wing flow separation.

  20. Use of the ETA-1 reactor for the validation of the multi-group APOLLO2-MORET 5 code and the Monte Carlo continuous energy MORET 5 code

    NASA Astrophysics Data System (ADS)

    Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.

    2014-06-01

    The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.

  1. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; and discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.

  2. Identification of Long Bone Fractures in Radiology Reports Using Natural Language Processing to Support Healthcare Quality Improvement

    PubMed Central

    Masino, Aaron J.; Casper, T. Charles; Dean, Jonathan M.; Bell, Jamie; Enriquez, Rene; Deakyne, Sara; Chamberlain, James M.; Alpern, Elizabeth R.

    2016-01-01

    Background. Important information to support healthcare quality improvement is often recorded in free text documents such as radiology reports. Natural language processing (NLP) methods may help extract this information, but these methods have rarely been applied outside the research laboratories where they were developed. Objective. To implement and validate NLP tools to identify long bone fractures for pediatric emergency medicine quality improvement. Methods. Using freely available statistical software packages, we implemented NLP methods to identify long bone fractures from radiology reports. A sample of 1,000 radiology reports was used to construct three candidate classification models. A test set of 500 reports was used to validate the model performance. Blinded manual review of radiology reports by two independent physicians provided the reference standard. Each radiology report was segmented, and word stem and bigram features were constructed. Common English “stop words” and rare features were excluded. We used 10-fold cross-validation to select optimal configuration parameters for each model. Accuracy, recall, precision and the F1 score were calculated. The final model was compared to the use of diagnosis codes for the identification of patients with long bone fractures. Results. There were 329 unique word stems and 344 bigrams in the training documents. A support vector machine classifier with Gaussian kernel performed best on the test set with accuracy=0.958, recall=0.969, precision=0.940, and F1 score=0.954. Optimal parameters for this model were cost=4 and gamma=0.005. The three classification models that we tested all performed better than diagnosis codes in terms of accuracy, precision, and F1 score (diagnosis code accuracy=0.932, recall=0.960, precision=0.896, and F1 score=0.927). Conclusions. NLP methods using a corpus of 1,000 training documents accurately identified acute long bone fractures from radiology reports. Strategic use of straightforward NLP methods, implemented with freely available software, offers quality improvement teams new opportunities to extract information from narrative documents. PMID:27826610
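
    The modeling approach described (unigram and bigram features with stop-word removal feeding an RBF-kernel support vector machine) can be sketched with scikit-learn; the toy reports below are invented, and the SVM uses the cost and gamma reported in the abstract:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Toy report snippets; the study used 1,000 annotated radiology reports.
        reports = ["transverse fracture of the femoral shaft",
                   "no acute fracture or dislocation",
                   "spiral fracture of the tibia",
                   "soft tissue swelling, osseous structures intact"]
        labels = [1, 0, 1, 0]

        # Unigrams + bigrams, English stop words removed; on a real corpus,
        # min_df would drop rare features and a stemmer would normalize words.
        clf = make_pipeline(
            CountVectorizer(ngram_range=(1, 2), stop_words="english"),
            SVC(kernel="rbf", C=4, gamma=0.005),  # parameters from the abstract
        )
        print(cross_val_score(clf, reports, labels, cv=2))  # 10-fold in the study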

  3. Long Non-Coding RNAs Responsive to Salt and Boron Stress in the Hyper-Arid Lluteño Maize from Atacama Desert.

    PubMed

    Huanca-Mamani, Wilson; Arias-Carrasco, Raúl; Cárdenas-Ninasivincha, Steffany; Rojas-Herrera, Marcelo; Sepúlveda-Hermosilla, Gonzalo; Caris-Maldonado, José Carlos; Bastías, Elizabeth; Maracaja-Coutinho, Vinicius

    2018-03-20

    Long non-coding RNAs (lncRNAs) have been defined as transcripts longer than 200 nucleotides which lack significant protein-coding potential and possess critical roles in diverse cellular processes. Long non-coding RNAs have recently been functionally characterized in plant stress-response mechanisms. In the present study, we perform a comprehensive identification of lncRNAs in response to combined stress induced by salinity and excess of boron in the Lluteño maize, a tolerant maize landrace from the Atacama Desert, Chile. We use deep RNA sequencing to identify a set of 48,345 different lncRNAs, of which 28,012 (58.1%) are conserved with other maize (B73, Mo17 or Palomero), with the remaining 41.9% belonging to potentially Lluteño-exclusive lncRNA transcripts. According to the B73 maize reference genome sequence, most Lluteño lncRNAs correspond to intergenic transcripts. Interestingly, Lluteño lncRNAs present an unusually higher overall expression compared to protein-coding genes under exposure to stressed conditions. In total, we identified 1710 lncRNAs putatively responsive to the combined stress conditions of salt and boron exposure. We also identified a set of 848 stress-responsive potential trans natural antisense transcript (trans-NAT) lncRNAs, which seem to regulate genes associated with regulation of transcription, response to stress, response to abiotic stimulus, and the nicotianamine metabolic process. Reverse transcription-quantitative PCR (RT-qPCR) experiments were performed on a subset of lncRNAs, validating their existence and expression patterns. Our results suggest that a diverse set of maize lncRNAs from leaves and roots is responsive to combined salt and boron stress, this being the first effort to identify lncRNAs from a maize landrace adapted to extreme conditions such as the Atacama Desert. The information generated is a starting point to understand the genomic adaptations that allow this maize to withstand such an extreme environment.

  4. Long Non-Coding RNAs Responsive to Salt and Boron Stress in the Hyper-Arid Lluteño Maize from Atacama Desert

    PubMed Central

    Huanca-Mamani, Wilson; Arias-Carrasco, Raúl; Cárdenas-Ninasivincha, Steffany; Rojas-Herrera, Marcelo; Sepúlveda-Hermosilla, Gonzalo; Caris-Maldonado, José Carlos; Bastías, Elizabeth; Maracaja-Coutinho, Vinicius

    2018-01-01

    Long non-coding RNAs (lncRNAs) have been defined as transcripts longer than 200 nucleotides which lack significant protein-coding potential and possess critical roles in diverse cellular processes. Long non-coding RNAs have recently been functionally characterized in plant stress–response mechanisms. In the present study, we perform a comprehensive identification of lncRNAs in response to combined stress induced by salinity and excess of boron in the Lluteño maize, a tolerant maize landrace from the Atacama Desert, Chile. We use deep RNA sequencing to identify a set of 48,345 different lncRNAs, of which 28,012 (58.1%) are conserved with other maize (B73, Mo17 or Palomero), with the remaining 41.9% belonging to potentially Lluteño-exclusive lncRNA transcripts. According to the B73 maize reference genome sequence, most Lluteño lncRNAs correspond to intergenic transcripts. Interestingly, Lluteño lncRNAs present an unusually higher overall expression compared to protein-coding genes under exposure to stressed conditions. In total, we identified 1710 lncRNAs putatively responsive to the combined stress conditions of salt and boron exposure. We also identified a set of 848 stress-responsive potential trans natural antisense transcript (trans-NAT) lncRNAs, which seem to regulate genes associated with regulation of transcription, response to stress, response to abiotic stimulus, and the nicotianamine metabolic process. Reverse transcription-quantitative PCR (RT-qPCR) experiments were performed on a subset of lncRNAs, validating their existence and expression patterns. Our results suggest that a diverse set of maize lncRNAs from leaves and roots is responsive to combined salt and boron stress, this being the first effort to identify lncRNAs from a maize landrace adapted to extreme conditions such as the Atacama Desert. The information generated is a starting point to understand the genomic adaptations that allow this maize to withstand such an extreme environment. PMID:29558449

  5. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.

  6. Organizational Effectiveness Information System (OEIS) User’s Manual

    DTIC Science & Technology

    1986-09-01

    SUBJECT CODES B-1; C. LISTING OF VALID RESOURCE SYSTEM CODES C-1 ... the valid codes used in the Implementation and Design System: MACOM 01, COE 02, DARCOM 03, EUSA 04, FORSCOM 05, HSC 06, HQDA 07, INSCOM 08, MDW 09

  7. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and against solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  8. Anonymization of DICOM Electronic Medical Records for Radiation Therapy

    PubMed Central

    Newhauser, Wayne; Jones, Timothy; Swerdloff, Stuart; Newhauser, Warren; Cilia, Mark; Carver, Robert; Halloran, Andy; Zhang, Rui

    2014-01-01

    Electronic medical records (EMR) and treatment plans are used in research on patient outcomes and radiation effects. In many situations researchers must remove protected health information (PHI) from EMRs. The literature contains several studies describing the anonymization of generic Digital Imaging and Communication in Medicine (DICOM) files and DICOM image sets but no publications were found that discuss the anonymization of DICOM radiation therapy plans, a key component of an EMR in a cancer clinic. In addition to this we were unable to find a commercial software tool that met the minimum requirements for anonymization and preservation of data integrity for radiation therapy research. The purpose of this study was to develop a prototype software code to meet the requirements for the anonymization of radiation therapy treatment plans and to develop a way to validate that code and demonstrate that it properly anonymized treatment plans and preserved data integrity. We extended an open-source code to process all relevant PHI and to allow for the automatic anonymization of multiple EMRs. The prototype code successfully anonymized multiple treatment plans in less than 1 minute per patient. We also tested commercial optical character recognition (OCR) algorithms for the detection of burned-in text on the images, but they were unable to reliably recognize text. In addition, we developed and tested an image filtering algorithm that allowed us to isolate and redact alpha-numeric text from a test radiograph. Validation tests verified that PHI was anonymized and data integrity, such as the relationship between DICOM unique identifiers (UID) was preserved. PMID:25147130
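
    A minimal sketch of the kind of PHI scrubbing described, using the open-source pydicom library rather than the authors' extended code (the attribute list is illustrative, not a complete PHI profile, and the file paths are hypothetical):

        import pydicom

        # Illustrative subset of PHI attributes; a real anonymizer must cover
        # the full DICOM confidentiality profile, including RT plan fields.
        PHI_ATTRIBUTES = ["PatientName", "PatientID", "PatientBirthDate",
                          "OtherPatientIDs", "InstitutionName",
                          "ReferringPhysicianName"]

        def anonymize(in_path, out_path):
            ds = pydicom.dcmread(in_path)
            for attr in PHI_ATTRIBUTES:
                if hasattr(ds, attr):
                    setattr(ds, attr, "")  # zero-length values are permitted
            ds.remove_private_tags()       # private tags often hide PHI
            ds.save_as(out_path)

        anonymize("rtplan.dcm", "rtplan_anon.dcm")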

  9. Anonymization of DICOM electronic medical records for radiation therapy.

    PubMed

    Newhauser, Wayne; Jones, Timothy; Swerdloff, Stuart; Newhauser, Warren; Cilia, Mark; Carver, Robert; Halloran, Andy; Zhang, Rui

    2014-10-01

    Electronic medical records (EMR) and treatment plans are used in research on patient outcomes and radiation effects. In many situations researchers must remove protected health information (PHI) from EMRs. The literature contains several studies describing the anonymization of generic Digital Imaging and Communication in Medicine (DICOM) files and DICOM image sets but no publications were found that discuss the anonymization of DICOM radiation therapy plans, a key component of an EMR in a cancer clinic. In addition to this we were unable to find a commercial software tool that met the minimum requirements for anonymization and preservation of data integrity for radiation therapy research. The purpose of this study was to develop a prototype software code to meet the requirements for the anonymization of radiation therapy treatment plans and to develop a way to validate that code and demonstrate that it properly anonymized treatment plans and preserved data integrity. We extended an open-source code to process all relevant PHI and to allow for the automatic anonymization of multiple EMRs. The prototype code successfully anonymized multiple treatment plans in less than 1 min per patient. We also tested commercial optical character recognition (OCR) algorithms for the detection of burned-in text on the images, but they were unable to reliably recognize text. In addition, we developed and tested an image filtering algorithm that allowed us to isolate and redact alpha-numeric text from a test radiograph. Validation tests verified that PHI was anonymized and data integrity, such as the relationship between DICOM unique identifiers (UID) was preserved. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Dynamic gene expression response to altered gravity in human T cells.

    PubMed

    Thiel, Cora S; Hauschild, Swantje; Huge, Andreas; Tauber, Svantje; Lauber, Beatrice A; Polzer, Jennifer; Paulsen, Katrin; Lier, Hartwin; Engelmann, Frank; Schmitz, Burkhard; Schütte, Andreas; Layer, Liliana E; Ullrich, Oliver

    2017-07-12

    We investigated the dynamics of immediate and initial gene expression response to different gravitational environments in human Jurkat T lymphocytic cells and compared expression profiles to identify potential gravity-regulated genes and adaptation processes. We used the Affymetrix GeneChip® Human Transcriptome Array 2.0 containing 44,699 protein coding genes and 22,829 non-protein coding genes and performed the experiments during a parabolic flight and a suborbital ballistic rocket mission to cross-validate gravity-regulated gene expression through independent research platforms and different sets of control experiments to exclude other factors than alteration of gravity. We found that gene expression in human T cells rapidly responded to altered gravity in the time frame of 20 s and 5 min. The initial response to microgravity involved mostly regulatory RNAs. We identified three gravity-regulated genes which could be cross-validated in both completely independent experiment missions: ATP6V1A/D, a vacuolar H+-ATPase (V-ATPase) responsible for acidification during bone resorption, IGHD3-3/IGHD3-10, diversity genes of the immunoglobulin heavy-chain locus participating in V(D)J recombination, and LINC00837, a long intergenic non-protein coding RNA. Due to the extensive and rapid alteration of gene expression associated with regulatory RNAs, we conclude that human cells are equipped with a robust and efficient adaptation potential when challenged with altered gravitational environments.

  11. Validity of Principal Diagnoses in Discharge Summaries and ICD-10 Coding Assessments Based on National Health Data of Thailand.

    PubMed

    Sukanya, Chongthawonsatid

    2017-10-01

    This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews, and additionally, by comparing data from the coding assessments with data in the computerized ICD (the data base used for reimbursement-purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codings often contained mistakes, particularly the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and codings.
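
    The four validity measures reported above follow from a 2x2 comparison of the recorded principal diagnosis against the medical-record gold standard; here is a minimal sketch with made-up counts:

        def validity_measures(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
            return {
                "sensitivity": tp / (tp + fn),  # truly present and recorded
                "specificity": tn / (tn + fp),  # truly absent and not recorded
                "ppv": tp / (tp + fp),          # recorded diagnoses that are correct
                "npv": tn / (tn + fn),          # non-recorded that are truly absent
            }

        # Hypothetical counts for one diagnosis category.
        print(validity_measures(tp=30, fp=20, fn=70, tn=880))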

  12. Measuring the quality of motivational interviewing in primary health care encounters: The development and validation of the motivational interviewing assessment scale (MIAS).

    PubMed

    Campiñez Navarro, Manuel; Pérula de Torres, Luis Ángel; Bosch Fontcuberta, Josep M; Barragán Brun, Nieves; Arbonies Ortiz, Juan Carlos; Novo Rodríguez, Jesús Manuel; Bóveda Fontán, Julia; Martín Alvarez, Remedios; Prados Castillejo, Jose Antonio; Rivas Doutreleau, Gabriela Renée; Domingo Peña, Carmen; Castro Moreno, Jaime Jesús; Romero Rodríguez, Esperanza María

    2016-09-01

    Motivational interviewing (MI) is a collaborative, goal-oriented method to help patients change behaviour. Tools often used to measure MI are the 'motivational interviewing skills code' (MISC), the 'motivational interviewing treatment integrity' (MITI) and the 'behaviour change counselling index' (BECCI). The first two instruments were not designed for use in primary healthcare (PHC) settings, and the BECCI is time-consuming. The motivational interviewing assessment scale (MIAS, 'EVEM' in Spanish) was developed to measure MI in PHC encounters as an alternative to these instruments. Objective. To validate MIAS as an instrument to assess the quality of MI in PHC settings. Methods. (a) Sixteen experts in MI participated in the design, face and consensus validity, using a Delphi-type methodology; (b) in 27 PHC centres located in Spain, four experts in MI tested its psychometric properties with 332 video recordings from the Dislip-EM study (consultations provided by 37 practitioners), assessing dimensionality, internal consistency, reliability (intra-class correlation coefficient, ICC), sensitivity to change and convergent validity with the BECCI scale. Results. A 14-item scale was obtained after the validation process. Factor analysis: two factors explained 76.6% of the total variance. Internal consistency: α = 0.99. Reliability: intra-rater ICC = 0.96; inter-rater ICC = 0.97. Sensitivity to change: means before and after training were 23.63 versus 38.57 (P < 0.001). Spearman's coefficient between the MIAS and the BECCI scale was 0.98 (P < 0.001). Conclusions. The MIAS is a consistent and reliable instrument to assess the use of MI in PHC settings.

  13. Code Sharing and Collaboration: Experiences from the Scientist's Expert Assistant Project and their Relevance to the Virtual Observatory

    NASA Technical Reports Server (NTRS)

    Jones, Jeremy; Grosvenor, Sandy; Wolf, Karl; Li, Connie; Koratkar, Anuradha; Powers, Edward I. (Technical Monitor)

    2001-01-01

    In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory and system independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing between groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, parts of the SEA code have since been adopted by the Space Telescope Science Institute. SEA has also supplied code for SOFIA, the SIRTF planning tools, and the JSky Open Source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough user groups adopt a set of common code and tools, de facto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we will review our experiences with SEA - both successes and failures - and offer some lessons learned that may promote further successes in collaboration and re-use.

  14. Code Sharing and Collaboration: Experiences From the Scientist's Expert Assistant Project and Their Relevance to the Virtual Observatory

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jones, Jeremy; Li, Connie; Mackey, Jennifer; Neher, Ken; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory and system independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing among groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, parts of the SEA code have since been adopted by the Space Telescope Science Institute. SEA has also supplied code for the SIRTF (Space Infrared Telescope Facility) planning tools, and the JSky Open Source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough users groups adopt a set of common code and tools, de facto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we will review our experiences with SEA--both successes and failures, and offer some lessons learned that might promote further successes in collaboration and re-use.

  15. Extension, validation and application of the NASCAP code

    NASA Technical Reports Server (NTRS)

    Katz, I.; Cassidy, J. J., III; Mandell, M. J.; Schnuelle, G. W.; Steen, P. G.; Parks, D. E.; Rotenberg, M.; Alexander, J. H.

    1979-01-01

    Numerous extensions were made to the NASCAP code. They fall into three categories: a greater range of definable objects, a more sophisticated computational model, and simplified code structure and usage. An important validation of NASCAP was performed using a new two-dimensional computer code (TWOD). An interactive code (MATCHG) was written to compare material parameter inputs with charging results. The first major application of NASCAP was performed on the SCATHA satellite. Shadowing and charging calculations were completed. NASCAP was installed at the Air Force Geophysics Laboratory, where researchers plan to use it to interpret SCATHA data.

  16. Spacetime Replication of Quantum Information Using (2, 3) Quantum Secret Sharing and Teleportation

    NASA Astrophysics Data System (ADS)

    Wu, Yadong; Khalid, Abdullah; Davijani, Masoud; Sanders, Barry

    The aim of this work is to construct a protocol to replicate quantum information in any valid configuration of causal diamonds and to assess the resources required to physically realize spacetime replication. We present a set of codes to replicate quantum information, along with a scheme to realize these codes using continuous-variable quantum optics. We use our proposed experimental realizations to determine upper bounds on the quantum and classical resources required to simulate spacetime replication. For four causal diamonds, our implementation scheme is more efficient than the one proposed previously. Our codes are designed using a decomposition algorithm for complete directed graphs, (2, 3) quantum secret sharing, quantum teleportation, and entanglement swapping. These results show that the simulation of spacetime replication of quantum information is feasible with existing experimental methods. Supported by Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support of the Gordon and Betty Moore Foundation (GBMF-2644).
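
    The (2, 3) threshold structure at the heart of these codes can be illustrated with its classical analogue, Shamir secret sharing: any two of three shares reconstruct the secret, while a single share reveals nothing. The sketch below is purely that classical stand-in, for intuition only; it is not the paper's continuous-variable quantum code.

        # Classical (2,3) Shamir threshold sharing -- an illustrative analogue of
        # the access structure in (2, 3) quantum secret sharing. Not the quantum
        # protocol: secrets here are integers, shares are points on a polynomial.
        import random

        P = 2**61 - 1  # a large prime; all arithmetic is mod P

        def split(secret, n=3, k=2):
            """Shares are points on a random degree-(k-1) polynomial through secret."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
            secret = 0
            for j, (xj, yj) in enumerate(shares):
                num, den = 1, 1
                for m, (xm, _) in enumerate(shares):
                    if m != j:
                        num = num * -xm % P
                        den = den * (xj - xm) % P
                secret = (secret + yj * num * pow(den, P - 2, P)) % P
            return secret

        shares = split(42)
        assert reconstruct(shares[:2]) == 42            # any two shares suffice
        assert reconstruct([shares[0], shares[2]]) == 42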

  17. Validity of the Child Facial Coding System for the Assessment of Acute Pain in Children With Cerebral Palsy.

    PubMed

    Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N

    2016-04-01

    The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
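
    The concurrent validity statistic reported here is a Pearson correlation between the two instruments' scores. A minimal sketch of that computation follows; the scores are made up for illustration, not the study's data.

        # Pearson correlation between Child Facial Coding System (CFCS) scores and
        # Numerical Rating Scale (NRS) scores. All values below are hypothetical.
        import numpy as np

        cfcs = np.array([3.0, 7.5, 1.2, 9.1, 4.4, 6.8, 2.3, 8.0])  # facial coding
        nrs  = np.array([2.0, 8.0, 1.0, 9.0, 5.0, 6.0, 3.0, 7.0])  # observer rating

        r = np.corrcoef(cfcs, nrs)[0, 1]
        print(f"Pearson r = {r:.2f}")  # the study reports r = .72 on its real data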

  18. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor the comprehensive variety of metrics needed to interpret the code responses with respect to experimental data; the RAVEN code addresses both of these gaps. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
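
    The approach described here - explore the input space by sampling and score the code responses against experimental data with quantitative metrics - can be sketched with a toy stand-in for the system code. Everything below (the model, the synthetic "experiment", and the normalized RMS metric) is an assumption for illustration; it is not RAVEN or RELAP-7.

        # Sampling-based validation sketch: draw input samples, run the simulation
        # for each, and score every response against an experimental trace.
        import numpy as np

        rng = np.random.default_rng(0)

        def system_code(k, t):                 # toy stand-in for a system code
            return 1.0 - np.exp(-k * t)

        t = np.linspace(0.0, 5.0, 50)
        experiment = 1.0 - np.exp(-0.8 * t) + rng.normal(0, 0.01, t.size)

        k_samples = rng.uniform(0.5, 1.1, 200)         # explore the input space
        responses = np.array([system_code(k, t) for k in k_samples])

        # Normalized RMS discrepancy of each sampled response vs. the experiment.
        metric = np.sqrt(((responses - experiment) ** 2).mean(axis=1)) / np.ptp(experiment)
        print(f"best sample: k = {k_samples[metric.argmin()]:.3f}; "
              f"metric range = [{metric.min():.3f}, {metric.max():.3f}]")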

  19. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.; Lee, Chi-Ming (Technical Monitor)

    2001-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.

  20. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  1. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  2. Experimental Validation of an Ion Beam Optics Code with a Visualized Ion Thruster

    NASA Astrophysics Data System (ADS)

    Nakayama, Yoshinori; Nakano, Masakatsu

    For validation of an ion beam optics code, the behavior of ion beam optics was experimentally observed and evaluated with a two-dimensional visualized ion thruster (VIT). Since the observed beam focus positions, sheath positions, and measured ion beam currents were in good agreement with the numerical results, it was confirmed that the numerical model of this code was appropriate. In addition, it was also confirmed that the beam focus position moved along the center axis of the grid hole according to the applied grid potentials, which differs from the conventional understanding/assumption. The VIT operations may be useful not only for the validation of ion beam optics codes but also for a fundamental and intuitive understanding of Child law sheath theory.
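
    The Child law sheath theory invoked here rests on the Child-Langmuir relation for space-charge-limited current density between plane electrodes, J = (4/9) eps0 sqrt(2q/m) V^(3/2) / d^2. A worked example follows; the xenon-ion gap and voltage values are illustrative, not taken from the paper.

        # Child-Langmuir space-charge-limited current density. Gap and voltage
        # below are typical ion-optics magnitudes chosen for illustration only.
        import math

        EPS0 = 8.8541878128e-12                  # vacuum permittivity, F/m
        Q    = 1.602176634e-19                   # elementary charge, C
        M_XE = 131.293 * 1.66053906660e-27       # xenon ion mass, kg

        def child_langmuir_j(V, d, q=Q, m=M_XE):
            """Current density (A/m^2) for gap d (m) at accelerating voltage V (V)."""
            return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * q / m) * V**1.5 / d**2

        print(f"J = {child_langmuir_j(V=1500.0, d=1.0e-3):.0f} A/m^2")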

  3. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.

  4. Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) processing speed scores as measures of noncredible responding: The third generation of embedded performance validity indicators.

    PubMed

    Erdodi, Laszlo A; Abeare, Christopher A; Lichtenstein, Jonathan D; Tyson, Bradley T; Kucharski, Brittany; Zuccato, Brandon G; Roth, Robert M

    2017-02-01

    Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
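
    The cutoffs reported in this abstract translate directly into a screening rule that counts failed indicators. The sketch below aggregates them; the patient values are hypothetical, and treating the Coding-Symbol Search difference as an absolute difference is our assumption.

        # Embedded validity indicators (EVIs) and cutoffs as reported above,
        # aggregated into a failure count. The example record is hypothetical.
        def wais_iv_evi_failures(psi, coding_ss, symbol_search_ss, coding_raw, ss_raw):
            ratio = coding_raw / ss_raw
            return {
                "PSI <= 79":                     psi <= 79,
                "Coding scaled <= 5":            coding_ss <= 5,
                "Symbol Search scaled <= 6":     symbol_search_ss <= 6,
                "Coding-SS difference >= 5":     abs(coding_ss - symbol_search_ss) >= 5,
                "CD/SS ratio <=1.41 or >=3.57":  ratio <= 1.41 or ratio >= 3.57,
            }

        checks = wais_iv_evi_failures(psi=76, coding_ss=5, symbol_search_ss=8,
                                      coding_raw=40, ss_raw=26)
        print(checks)
        # Per the abstract, failing >= 2 EVIs (and especially >= 3) is the
        # aggregate signal; no single EVI should be used in isolation.
        print(f"{sum(checks.values())} EVIs failed")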

  5. Validation of the "HAMP" mapping algorithm: a tool for long-term trauma research studies in the conversion of AIS 2005 to AIS 98.

    PubMed

    Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard

    2011-07-01

    There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm "Harborview AIS Mapping Program (HAMP)" to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP versus manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
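
    The agreement statistic used here, Cohen's kappa, corrects the observed agreement between two coders for the agreement expected by chance. A self-contained sketch follows, with hypothetical code labels rather than real AIS data.

        # Cohen's kappa between manually determined codes and algorithm-derived
        # codes. The two label lists below are hypothetical.
        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            po = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed
            ca, cb = Counter(rater_a), Counter(rater_b)
            pe = sum(ca[k] * cb[k] for k in ca) / n**2              # by chance
            return (po - pe) / (1 - pe)

        manual = ["850.0", "850.0", "805.4", "807.2", "850.0", "805.4"]
        hamp   = ["850.0", "850.0", "805.4", "850.0", "850.0", "805.4"]
        print(f"kappa = {cohens_kappa(manual, hamp):.2f}")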

  6. Development of a patient-specific dosimetry estimation system in nuclear medicine examination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, H. H.; Dong, S. L.; Yang, H. J.

    2011-07-01

    The purpose of this study is to develop a patient-specific dosimetry estimation system in nuclear medicine examination using a SimSET-based Monte Carlo code. We added a dose deposition routine to store the deposited energy of the photons during their flights in SimSET and developed a user-friendly interface for reading PET and CT images. Dose calculated on the ORNL phantom was used to validate the accuracy of this system. The S values for 99mTc, 18F and 131I obtained by the system were compared to those from the MCNP4C code and OLINDA. The ratios of S values computed by this system to those obtained with OLINDA for various organs ranged from 0.93 to 1.18, which is comparable to those obtained from the MCNP4C code (0.94 to 1.20). The average ratios of S value were 0.99±0.04, 1.03±0.05, and 1.00±0.07 for the isotopes 131I, 18F, and 99mTc, respectively. The simulation time of SimSET was two times faster than MCNP4C's for the various isotopes. A 3D dose calculation was also performed on a patient data set with PET/CT examination using this system. Results from the patient data showed that the estimated S values using this system differed slightly from those of OLINDA for the ORNL phantom. In conclusion, this system can generate patient-specific dose distributions and display the isodose curves on top of the anatomic structure through a friendly graphical user interface. It may also provide a useful tool to establish an appropriate dose-reduction strategy for patients in nuclear medicine environments. (authors)

  7. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
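
    The two published algorithms reduce to matching claims against a small ICD-9 code set. A minimal sketch of that kind of claims screen follows, with hypothetical records; as the review notes, the predictive power of such flags is unknown.

        # Claims screen using the two ICD-9 codes identified by the review:
        # 518.8 "other diseases of lung" and 518.81 "acute respiratory failure".
        ARF_CODES = {"518.8", "518.81"}

        claims = [                              # hypothetical claims records
            {"patient": 1, "dx": ["518.81", "428.0"]},
            {"patient": 2, "dx": ["486"]},
            {"patient": 3, "dx": ["518.8"]},
        ]

        flagged = [c["patient"] for c in claims if ARF_CODES & set(c["dx"])]
        print(f"patients flagged for possible ARF: {flagged}")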

  8. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, if applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather small set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set.
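
    The compression idea - replace an ensemble of Monte Carlo replicas with a central value plus a small set of Hessian-like error directions - can be sketched with a principal-component decomposition of the replica covariance. This PCA shortcut is a simplification for illustration; the paper's mc2hessian method instead selects an optimal subset of replicas with a genetic algorithm.

        # PCA-style Hessian-like representation of a Monte Carlo replica ensemble.
        # Synthetic replicas stand in for PDF values on a grid of x points.
        import numpy as np

        rng = np.random.default_rng(1)
        n_rep, n_x = 100, 20
        replicas = 1.0 + 0.1 * rng.standard_normal((n_rep, n_x)).cumsum(axis=1)

        central = replicas.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(replicas, rowvar=False))

        n_eig = 10                               # keep the largest directions
        directions = eigvec[:, -n_eig:] * np.sqrt(eigval[-n_eig:])

        # Hessian-style symmetric uncertainty vs. the Monte Carlo standard deviation:
        hess_err = np.sqrt((directions ** 2).sum(axis=1))
        mc_err = replicas.std(axis=0)
        print(f"max relative mismatch: {abs(hess_err / mc_err - 1).max():.3f}")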

  9. The BSM-AI project: SUSY-AI-generalizing LHC limits on supersymmetry with machine learning

    NASA Astrophysics Data System (ADS)

    Caron, Sascha; Kim, Jong Soo; Rolbiecki, Krzysztof; de Austri, Roberto Ruiz; Stienen, Bob

    2017-04-01

    A key research question at the Large Hadron Collider is the test of models of new physics. Testing whether a particular parameter set of such a model is excluded by LHC data is a challenge: it requires time-consuming generation of scattering events, simulation of the detector response, event reconstruction, cross section calculations, and analysis code to test against several hundred signal regions defined by the ATLAS and CMS experiments. In the BSM-AI project we approach this challenge with a new idea. A machine learning tool is devised to predict, within a fraction of a millisecond, whether a model is excluded or not directly from the model parameters. A first example is SUSY-AI, trained on the phenomenological supersymmetric standard model (pMSSM). About 300,000 pMSSM model sets - each tested against 200 signal regions by ATLAS - have been used to train and validate SUSY-AI. The code is currently able to reproduce the ATLAS exclusion regions in 19 dimensions with an accuracy of at least 93%. It has been validated further within the constrained MSSM and the minimal natural supersymmetric model, again showing high accuracy. SUSY-AI and its future BSM derivatives will help to solve the problem of recasting LHC results for any model of new physics. SUSY-AI can be downloaded from http://susyai.hepforge.org/. An on-line interface to the program for quick testing purposes can be found at http://www.susy-ai.org/.
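
    The underlying recipe - train a classifier on labeled model points once, then query it instead of rerunning the full simulation chain - is easy to sketch on toy data. Two synthetic parameters stand in for the 19-dimensional pMSSM, and the random-forest classifier is our choice for illustration, not necessarily SUSY-AI's.

        # Toy "exclusion classifier": learn a map from model parameters to an
        # excluded/allowed label, then query it in microseconds.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        X = rng.uniform(0, 2000, size=(20_000, 2))         # e.g. two masses, GeV
        y = (X[:, 0] + 0.5 * X[:, 1] < 1200).astype(int)   # toy excluded region

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
        print("point (400, 600) excluded?", bool(clf.predict([[400.0, 600.0]])[0]))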

  10. Moderate sensitivity and high specificity of emergency department administrative data for transient ischemic attacks.

    PubMed

    Yu, Amy Y X; Quan, Hude; McRae, Andrew; Wagner, Gabrielle O; Hill, Michael D; Coutts, Shelagh B

    2017-09-18

    Validation of administrative data case definitions is key for accurate passive surveillance of disease. Transient ischemic attack (TIA) is a condition primarily managed in the emergency department. However, prior validation studies have focused on data after inpatient hospitalization. We aimed to determine the validity of the Canadian 10th International Classification of Diseases (ICD-10-CA) codes for TIA in the national ambulatory administrative database. We performed a diagnostic accuracy study of four ICD-10-CA case definition algorithms for TIA in the emergency department setting. The study population was obtained from two ongoing studies on the diagnosis of TIA and minor stroke versus stroke mimic using serum biomarkers and neuroimaging. Two reference standards were used: 1) the emergency department clinical diagnosis determined by chart abstractors and 2) the 90-day final diagnosis, both obtained by stroke neurologists, to calculate the sensitivity, specificity, and positive and negative predictive values (PPV and NPV) of the ICD-10-CA algorithms for TIA. Among 417 patients, emergency department adjudication showed 163 (39.1%) TIAs, 155 (37.2%) ischemic strokes, and 99 (23.7%) stroke mimics. The most restrictive algorithm, defined as a TIA code in the main position, had the lowest sensitivity (36.8%) but the highest specificity (92.5%) and PPV (76.0%). The most inclusive algorithm, defined as a TIA code in any position with or without a query prefix, had the highest sensitivity (63.8%) but the lowest specificity (81.5%) and PPV (68.9%). Sensitivity, specificity, PPV, and NPV were overall lower when using the 90-day diagnosis as the reference standard. Emergency department administrative data reflect the diagnosis of suspected TIA with high specificity but underestimate the burden of disease. Future studies are necessary to understand the reasons for the low to moderate sensitivity.
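
    The four validity measures reported here all follow from a 2x2 table of algorithm flag versus reference-standard diagnosis. In the sketch below, the counts are chosen to roughly reproduce the restrictive algorithm's figures and are purely illustrative.

        # Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table.
        def diagnostic_accuracy(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "PPV":         tp / (tp + fp),
                "NPV":         tn / (tn + fn),
            }

        # Hypothetical counts for a restrictive algorithm: few flags, mostly correct.
        print(diagnostic_accuracy(tp=60, fp=19, fn=103, tn=235))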

  11. Structure and software tools of AIDA.

    PubMed

    Duisterhout, J S; Franken, B; Witte, F

    1987-01-01

    AIDA consists of a set of software tools for the fast development of easy-to-maintain Medical Information Systems. AIDA supports all aspects of such a system during both development and operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation, a database management system including a data dictionary and a set of run-time routines for database access, and routines for querying the database and output formatting. Unlike an application generator, the user of AIDA may select parts of the tools to fulfill his needs and program other subsystems not developed with AIDA. The AIDA software uses as its host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment. This greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model. This data model is built on top of the MUMPS database, the so-called global structure. This relational model overcomes the restrictions of the global structure regarding string length. The global structure is especially powerful for sorting purposes. Using MUMPS as a host language gives the user an easy interface between user-defined data validation checks or other user-defined code and the AIDA tools. AIDA has been designed primarily for prototyping and for the construction of Medical Information Systems in a research environment which requires a flexible approach. The prototyping facility of AIDA is terminal-independent and even, to a great extent, multi-lingual. Most of these features are table-driven; this allows on-line changes in the use of terminal type and language, but also causes overhead. AIDA has a set of optimizing tools by which it is possible to build faster, but less flexible, code from these table definitions. By separating the AIDA software into a source and a run-time version, one is able to write implementation-specific code which can be selected and loaded by a special source loader that is part of the AIDA software. This feature is also accessible for maintaining software on different sites and on different installations.

  12. Moral judgment reloaded: a moral dilemma validation study

    PubMed Central

    Christensen, Julia F.; Flexas, Albert; Calabrese, Margareta; Gut, Nadine K.; Gomila, Antoni

    2014-01-01

    We propose a revised set of moral dilemmas for studies on moral judgment. First, we selected a total of 46 moral dilemmas available in the literature and fine-tuned them in terms of four conceptual factors (Personal Force, Benefit Recipient, Evitability, and Intention) and methodological aspects of the dilemma formulation (word count, expression style, question formats) that have been shown to influence moral judgment. Second, we obtained normative codings of arousal and valence for each dilemma, showing that emotional arousal in response to moral dilemmas depends crucially on the factors Personal Force, Benefit Recipient, and Intentionality. Third, we validated the dilemma set, confirming that people's moral judgment is sensitive to all four conceptual factors and to their interactions. Results are discussed in the context of this field of research, also outlining the relevance of our RT effects for the Dual Process account of moral judgment. Finally, we suggest tentative theoretical avenues for future testing, particularly stressing the importance of the factor Intentionality in moral judgment. Additionally, due to the importance of cross-cultural studies in the quest for universals in human moral cognition, we provide the new set of dilemmas in six languages (English, French, German, Spanish, Catalan, and Danish). The norming values provided here refer to the Spanish dilemma set. PMID:25071621

  13. Verification and validation of RADMODL Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.

  14. Efficient simulation of voxelized phantom in GATE with embedded SimSET multiple photon history generator.

    PubMed

    Lin, Hsin-Hon; Chuang, Keh-Shih; Lin, Yi-Hsing; Ni, Yu-Ching; Wu, Jay; Jan, Meei-Ling

    2014-10-21

    GEANT4 Application for Tomographic Emission (GATE) is a powerful Monte Carlo simulator that combines the advantages of the general-purpose GEANT4 simulation code and the specific software tool implementations dedicated to emission tomography. However, the detailed physical modelling of GEANT4 is highly computationally demanding, especially when tracking particles through voxelized phantoms. To circumvent the relatively slow simulation of voxelized phantoms in GATE, another efficient Monte Carlo code can be used to simulate photon interactions and transport inside a voxelized phantom. The simulation system for emission tomography (SimSET), a dedicated Monte Carlo code for PET/SPECT systems, is well-known for its efficiency in simulation of voxel-based objects. An efficient Monte Carlo workflow integrating GATE and SimSET for simulating pinhole SPECT has been proposed to improve voxelized phantom simulation. Although the workflow achieves a desirable increase in speed, it sacrifices the ability to simulate decaying radioactive sources such as non-pure positron emitters or multiple emission isotopes with complex decay schemes and lacks the modelling of time-dependent processes due to the inherent limitations of the SimSET photon history generator (PHG). Moreover, a large volume of disk storage is needed to store the huge temporal photon history file produced by SimSET that must be transported to GATE. In this work, we developed a multiple photon emission history generator (MPHG) based on SimSET/PHG to support a majority of the medically important positron emitters. We incorporated the new generator codes inside GATE to improve the simulation efficiency of voxelized phantoms in GATE, while eliminating the need for the temporal photon history file. The validation of this new code based on a MicroPET R4 system was conducted for (124)I and (18)F with mouse-like and rat-like phantoms. Comparison of GATE/MPHG with GATE/GEANT4 indicated there is a slight difference in energy spectra for energy below 50 keV due to the lack of x-ray simulation from (124)I decay in the new code. The spatial resolution, scatter fraction and count rate performance are in good agreement between the two codes. For the case studies of (18)F-NaF ((124)I-IAZG) using MOBY phantom with 1  ×  1 × 1 mm(3) voxel sizes, the results show that GATE/MPHG can achieve acceleration factors of approximately 3.1 × (4.5 ×), 6.5 × (10.7 ×) and 9.5 × (31.0 ×) compared with GATE using the regular navigation method, the compressed voxel method and the parameterized tracking technique, respectively. In conclusion, the implementation of MPHG in GATE allows for improved efficiency of voxelized phantom simulations and is suitable for studying clinical and preclinical imaging.

  15. WEC-SIM Validation Testing Plan FY14 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, Kelley Michelle

    2016-02-01

    The WEC-Sim project is currently on track, having met both the SNL and NREL FY14 Milestones, as shown in Table 1 and Table 2. This is also reflected in the Gantt chart uploaded to the WEC-Sim SharePoint site in the FY14 Q4 Deliverables folder. The work completed in FY14 includes code verification through code-to-code comparison (FY14 Q1 and Q2), preliminary code validation through comparison to experimental data (FY14 Q2 and Q3), presentation and publication of the WEC-Sim project at OMAE 2014 [1], [2], [3] and GMREC/METS 2014 [4] (FY14 Q3), WEC-Sim code development and public open-source release (FY14 Q3), and development of a preliminary WEC-Sim validation test plan (FY14 Q4). This report presents the preliminary Validation Testing Plan developed in FY14 Q4. The validation test effort started in FY14 Q4 and will go on through FY15. Thus far the team has developed a device selection method, selected a device, placed a contract with the testing facility, established several collaborations including industry contacts, and developed working ideas on the testing details such as scaling, device design, and test conditions.

  16. A validated case definition for chronic rhinosinusitis in administrative data: a Canadian perspective.

    PubMed

    Rudmik, Luke; Xu, Yuan; Kukec, Edward; Liu, Mingfu; Dean, Stafford; Quan, Hude

    2016-11-01

    Pharmacoepidemiological research using administrative databases has become increasingly popular for chronic rhinosinusitis (CRS); however, without a validated case definition the cohort evaluated may be inaccurate, resulting in biased and incorrect outcomes. The objective of this study was to develop and validate a generalizable administrative database case definition for CRS using International Classification of Diseases, 9th edition (ICD-9)-coded claims. A random sample of 100 patients with a guideline-based diagnosis of CRS and 100 control patients were selected and then linked to a Canadian physician claims database from March 31, 2010, to March 31, 2015. The proportion of CRS ICD-9-coded claims (473.x and 471.x) for each of these 200 patients was reviewed and the validity of 7 different ICD-9-based coding algorithms was evaluated. The CRS case definition of ≥2 claims with a CRS ICD-9 code (471.x or 473.x) within 2 years of the reference case provides balanced validity, with a sensitivity of 77% and specificity of 79%. Applying this CRS case definition to the claims database produced a CRS cohort of 51,000 patients with characteristics that were consistent with published demographics and rates of comorbid asthma, allergic rhinitis, and depression. This study has validated several coding algorithms; based on the results, a case definition of ≥2 physician claims of CRS (ICD-9 of 471.x or 473.x) within 2 years provides an optimal level of validity. Future studies will need to validate this administrative case definition from different health system perspectives and using larger retrospective chart reviews from multiple providers. © 2016 ARS-AAOA, LLC.
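
    The validated case definition is mechanical enough to express directly: at least two claims carrying an ICD-9 CRS code (471.x or 473.x) within a two-year window. A sketch applied to one patient's hypothetical claim history:

        # Apply the >= 2 claims / 2-year CRS case definition to a claim history.
        from datetime import date, timedelta

        def is_crs_code(code):
            return code.startswith("471") or code.startswith("473")

        def meets_crs_definition(claims, window=timedelta(days=730)):
            """claims: list of (service_date, icd9_code) tuples for one patient."""
            dates = sorted(d for d, code in claims if is_crs_code(code))
            return any(b - a <= window for a, b in zip(dates, dates[1:]))

        patient = [(date(2012, 3, 1), "473.0"),      # hypothetical claims
                   (date(2013, 9, 15), "471.8"),
                   (date(2014, 1, 2), "401.9")]
        print(meets_crs_definition(patient))   # True: two CRS claims ~18 months apart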

  17. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays

    USGS Publications Warehouse

    Hahn, Cassidy M.; Iwanowicz, Luke R.; Cornman, Robert S.; Mazik, Patricia M.; Blazer, Vicki S.

    2016-01-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species; smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest.

  18. Cluster Analysis of Rat Olfactory Bulb Responses to Diverse Odorants

    PubMed Central

    Falasconi, Matteo; Leon, Michael; Johnson, Brett A.; Marco, Santiago

    2012-01-01

    In an effort to deepen our understanding of mammalian olfactory coding, we have used an objective method to analyze a large set of odorant-evoked activity maps collected systematically across the rat olfactory bulb to determine whether such an approach could identify specific glomerular regions that are activated by related odorants. To that end, we combined fuzzy c-means clustering methods with a novel validity approach based on cluster stability to evaluate the significance of the fuzzy partitions on a data set of glomerular layer responses to a large diverse group of odorants. Our results confirm the existence of glomerular response clusters to similar odorants. They further indicate a partial hierarchical chemotopic organization wherein larger glomerular regions can be subdivided into smaller areas that are rather specific in their responses to particular functional groups of odorants. These clusters bear many similarities to, as well as some differences from, response domains previously proposed for the glomerular layer of the bulb. These data also provide additional support for the concept of an identity code in the mammalian olfactory system. PMID:22459165
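
    Fuzzy c-means, the clustering algorithm used here, alternates between updating cluster centers and graded (fuzzy) memberships. A minimal NumPy implementation on synthetic data follows; the paper's stability-based validity analysis over resampled partitions is beyond this sketch.

        # Minimal fuzzy c-means with fuzzifier m; reports the fuzzy partition
        # coefficient (FPC), a common validity index. Data are synthetic.
        import numpy as np

        def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)      # memberships: rows sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                U = d ** (-2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
        centers, U = fuzzy_cmeans(X, c=2)
        print("centers:\n", centers.round(2))
        print("fuzzy partition coefficient:", round(float((U ** 2).sum() / len(X)), 3))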

  19. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  20. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE PAGES

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...

    2017-03-23

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
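
    One of the checks LIVVkit automates - bit-for-bit comparison of a regression run against reference output - is simple to sketch. The function below illustrates the idea only; it is not LIVVkit's API, and plain arrays stand in for model output files.

        # Bit-for-bit regression check: compare a model output field against a
        # reference and report where (and by how much) they differ.
        import numpy as np

        def bit_for_bit(test, ref, name="thk"):
            if np.array_equal(test, ref):
                return f"{name}: PASS (bit-for-bit)"
            diff = test - ref
            bad = np.flatnonzero(diff)
            return (f"{name}: FAIL -- {bad.size} of {diff.size} values differ, "
                    f"max |diff| = {np.abs(diff).max():.3e}, first at index {bad[0]}")

        ref = np.linspace(0.0, 3000.0, 1000)       # stand-in ice-thickness field
        test = ref.copy()
        print(bit_for_bit(test, ref))
        test[417] += 1e-9                          # a tiny nonreproducible change
        print(bit_for_bit(test, ref))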

  1. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
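
    The inter-code variability quoted above is a spread statistic over the participating codes' peak-pressure curves. A sketch of that computation follows; the pressure table is illustrative, not the study's data.

        # Percent spread in peak shock pressure across codes at each distance.
        import numpy as np

        distances = np.array([1.0, 2.0, 4.0, 8.0])   # in projectile radii
        peak_p = np.array([                          # GPa; one row per code (made up)
            [65.0, 22.0, 6.1, 1.5],
            [61.0, 20.5, 5.4, 1.3],
            [58.0, 19.0, 5.0, 1.2],
        ])

        spread_pct = 100 * np.ptp(peak_p, axis=0) / peak_p.mean(axis=0)
        for r, s in zip(distances, spread_pct):
            print(f"r = {r:>4.1f}: inter-code spread = {s:4.1f}%")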

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of 'information gain' through the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide the experimenters in the selection of the experiment settings. This idea was extended to evaluate how the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models.
This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and the form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component in the project team's work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters. This is an improvement to model validation and uncertainty quantification extending beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project has supported the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other is a post-doctoral fellow at the Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.

  3. Beam focal spot position determination for an Elekta linac with the Agility® head; practical guide with a ready-to-go procedure.

    PubMed

    Chojnowski, Jacek M; Taylor, Lee M; Sykes, Jonathan R; Thwaites, David I

    2018-05-14

    A novel phantomless, EPID-based method of measuring the beam focal spot offset of a linear accelerator was proposed and validated for Varian machines. In this method, one set of jaws and the MLC were utilized to form a symmetric field, and a 180° collimator rotation was utilized to determine the radiation isocenter defined by the jaws and the MLC, respectively. The difference between these two isocenters is directly correlated with the beam focal spot offset of the linear accelerator. In the current work, the method has been adapted for Elekta linacs. An Elekta linac with the Agility® head does not have two sets of jaws; therefore, a modified method is presented making use of one set of diaphragms, the MLC, and a full 360° collimator rotation. The modified method has been tested on two Elekta Synergy® linacs with Agility® heads and independently validated. A practical guide with instructions and a MATLAB® code is attached for easy implementation. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  4. The Nuclear Energy Knowledge and Validation Center – Summary of Activities Conducted in FY15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans David; Hong, Bonnie Colleen

    2016-05-01

    The Nuclear Energy Knowledge and Validation Center (NEKVaC) is a new initiative by the Department of Energy and the Idaho National Laboratory to coordinate and focus the resources and expertise that exist within the DOE Complex toward solving issues in modern nuclear code validation. In time, code owners, users, and developers will view the Center as a partner and essential resource for acquiring the best practices and latest techniques for validating codes, for guidance in planning and executing experiments, for facilitating access to, and maximizing the usefulness of, existing data, and for preserving knowledge for continual use by nuclear professionals and organizations for their own validation needs. The scope of the Center covers many inter-related activities which will need to be cultivated carefully in the near term and managed properly once the Center is fully functional. Three areas comprise the principal mission: 1) identification and prioritization of projects that extend the field of validation science and its application to modern codes, 2) adaptation or development of best practices and guidelines for high fidelity multiphysics/multiscale analysis code development and associated experiment design, and 3) definition of protocols for data acquisition and knowledge preservation, with a portal for access to databases currently scattered among numerous organizations. These mission areas, while each having a unique focus, are inter-dependent and complementary. Likewise, all activities supported by the NEKVaC, both near-term and long-term, must possess elements supporting all three. This cross-cutting nature is essential to ensuring that activities and supporting personnel do not become 'stove-piped', i.e., focused so much on a specific function that the activity itself becomes the objective rather than achieving the larger vision. Achieving the broader vision will require a healthy and accountable level of activity in each of the areas. This will take time and significant DOE support. Growing too fast (budget-wise) will not allow ideas to mature, lessons to be learned, and taxpayer money to be spent responsibly. The process should be initiated with a small set of tasks, executed over a short but reasonable term, that will exercise most if not all aspects of the Center's potential operation. The initial activities described in this report have a high potential for near-term success in demonstrating Center objectives, but also to work out some of the issues in task execution, communication between functional elements, and the ability to raise awareness of the Center and cement stakeholder buy-in. This report begins with a description of the mission areas, specifically the role played by each and the types of activities for which they are responsible. It then lists and describes the proposed near-term tasks upon which future efforts can build.

  5. The Proteus Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Bui, Trong T.; Cavicchi, Richard H.; Conley, Julianne M.; Molls, Frank B.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. The emphasis in the development of Proteus is not algorithm development or research on numerical methods, but rather the development of the code itself. The objective is to develop codes that are user-oriented, easily-modified, and well-documented. Well-proven, state-of-the-art solution algorithms are being used. Code readability, documentation (both internal and external), and validation are being emphasized. This paper is a status report on the Proteus development effort. The analysis and solution procedure are described briefly, and the various features in the code are summarized. The results from some of the validation cases that have been run are presented for both the two- and three-dimensional codes.

  6. 45 CFR 162.1000 - General requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... sets. Use the applicable medical data code sets described in § 162.1002 as specified in the...) Nonmedical data code sets. Use the nonmedical data code sets as described in the implementation... Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED...

  7. 45 CFR 162.1000 - General requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... sets. Use the applicable medical data code sets described in § 162.1002 as specified in the...) Nonmedical data code sets. Use the nonmedical data code sets as described in the implementation... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED...

  8. 45 CFR 162.1000 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... sets. Use the applicable medical data code sets described in § 162.1002 as specified in the...) Nonmedical data code sets. Use the nonmedical data code sets as described in the implementation... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED...

  9. 45 CFR 162.1000 - General requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... sets. Use the applicable medical data code sets described in § 162.1002 as specified in the...) Nonmedical data code sets. Use the nonmedical data code sets as described in the implementation... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED...

  10. 45 CFR 162.1000 - General requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... sets. Use the applicable medical data code sets described in § 162.1002 as specified in the...) Nonmedical data code sets. Use the nonmedical data code sets as described in the implementation... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED...

  11. GENCODE: the reference human genome annotation for The ENCODE Project.

    PubMed

    Harrow, Jennifer; Frankish, Adam; Gonzalez, Jose M; Tapanari, Electra; Diekhans, Mark; Kokocinski, Felix; Aken, Bronwen L; Barrell, Daniel; Zadissa, Amonida; Searle, Stephen; Barnes, If; Bignell, Alexandra; Boychenko, Veronika; Hunt, Toby; Kay, Mike; Mukherjee, Gaurab; Rajan, Jeena; Despacio-Reyes, Gloria; Saunders, Gary; Steward, Charles; Harte, Rachel; Lin, Michael; Howald, Cédric; Tanzer, Andrea; Derrien, Thomas; Chrast, Jacqueline; Walters, Nathalie; Balasubramanian, Suganthi; Pei, Baikang; Tress, Michael; Rodriguez, Jose Manuel; Ezkurdia, Iakes; van Baren, Jeltje; Brent, Michael; Haussler, David; Kellis, Manolis; Valencia, Alfonso; Reymond, Alexandre; Gerstein, Mark; Guigó, Roderic; Hubbard, Tim J

    2012-09-01

    The GENCODE Consortium aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. Since the first public release of this annotation data set, few new protein-coding loci have been added, yet the number of alternative splicing transcripts annotated has steadily increased. The GENCODE 7 release contains 20,687 protein-coding and 9640 long noncoding RNA loci and has 33,977 coding transcripts not represented in UCSC genes and RefSeq. It also has the most comprehensive annotation of long noncoding RNA (lncRNA) loci publicly available with the predominant transcript form consisting of two exons. We have examined the completeness of the transcript annotation and found that 35% of transcriptional start sites are supported by CAGE clusters and 62% of protein-coding genes have annotated polyA sites. Over one-third of GENCODE protein-coding genes are supported by peptide hits derived from mass spectrometry spectra submitted to Peptide Atlas. New models derived from the Illumina Body Map 2.0 RNA-seq data identify 3689 new loci not currently in GENCODE, of which 3127 consist of two exon models indicating that they are possibly unannotated long noncoding loci. GENCODE 7 is publicly available from gencodegenes.org and via the Ensembl and UCSC Genome Browsers.
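
    The annotation is distributed as a GTF file, so tallies like those quoted above can be reproduced by counting gene records per biotype. A sketch follows; the file name is a placeholder, and gene_type is the attribute key used in GENCODE GTFs.

        # Tally GENCODE gene loci by biotype from a (gzipped) GTF file.
        import gzip
        import re
        from collections import Counter

        counts = Counter()
        with gzip.open("gencode.annotation.gtf.gz", "rt") as fh:  # placeholder path
            for line in fh:
                if line.startswith("#"):
                    continue
                fields = line.rstrip("\n").split("\t")
                if fields[2] != "gene":
                    continue
                m = re.search(r'gene_type "([^"]+)"', fields[8])
                if m:
                    counts[m.group(1)] += 1

        for biotype, n in counts.most_common(5):
            print(f"{biotype}: {n}")   # e.g. protein_coding, lncRNA classes, ...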

  12. History of one family of atmospheric radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Anderson, Gail P.; Wang, Jinxue; Hoke, Michael L.; Kneizys, F. X.; Chetwynd, James H., Jr.; Rothman, Laurence S.; Kimball, L. M.; McClatchey, Robert A.; Shettle, Eric P.; Clough, Shepard A.; Gallery, William O.; Abreu, Leonard W.; Selby, John E. A.

    1994-12-01

    Beginning in the early 1970's, the then Air Force Cambridge Research Laboratory initiated a program to develop computer-based atmospheric radiative transfer algorithms. The first attempts were translations of graphical procedures described in a 1970 report on The Optical Properties of the Atmosphere, based on empirical transmission functions and effective absorption coefficients derived primarily from controlled laboratory transmittance measurements. The fact that spectrally-averaged atmospheric transmittance T does not obey the Beer-Lambert Law (T = exp(-sigma*eta), where sigma is a species absorption cross section, independent of eta, the species column amount along the path) at any but the finest spectral resolution was already well known. Band models to describe this gross behavior were developed in the 1950's and 60's. Thus began LOWTRAN, the Low Resolution Transmittance Code, first released in 1972. This limited initial effort has now progressed to a set of codes and related algorithms (including line-of-sight spectral geometry, direct and scattered radiance and irradiance, non-local thermodynamic equilibrium, etc.) that contain thousands of coding lines, hundreds of subroutines, and improved accuracy, efficiency, and, ultimately, accessibility. This review will include LOWTRAN, HITRAN (atlas of high-resolution molecular spectroscopic data), FASCODE (Fast Atmospheric Signature Code), and MODTRAN (Moderate Resolution Transmittance Code), their permutations, validations, and applications, particularly as related to passive remote sensing and energy deposition.
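
    The non-Beer-Lambert behavior of band-averaged transmittance noted above is easy to illustrate numerically. The sketch below is an illustration only (not LOWTRAN code), with an assumed lognormal distribution of line cross sections; it shows that averaging transmittance over many lines is not the same as applying Beer-Lambert to the mean cross section.

```python
import numpy as np

# Minimal illustration: the band-averaged transmittance <exp(-sigma*eta)>
# generally exceeds exp(-<sigma>*eta) because strong lines saturate first
# (Jensen's inequality). Cross-section distribution is an assumption.
rng = np.random.default_rng(0)
sigma = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)  # per-line cross sections (arb. units)
eta = 2.0                                                # column amount along the path

t_band_avg = np.exp(-sigma * eta).mean()   # true spectrally averaged transmittance
t_beer     = np.exp(-sigma.mean() * eta)   # Beer-Lambert applied to the mean cross section

print(f"band-averaged T = {t_band_avg:.4f}")
print(f"Beer-Lambert T  = {t_beer:.4f}")   # markedly smaller: averaging does not commute with exp
```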

  13. Assessing patient-centered communication in a family practice setting: how do we measure it, and whose opinion matters?

    PubMed

    Clayton, Margaret F; Latimer, Seth; Dunn, Todd W; Haas, Leonard

    2011-09-01

    This study evaluated variables thought to influence patients' perceptions of patient-centeredness. We also compared results from two coding schemes that purport to evaluate patient-centeredness: the Measure of Patient-Centered Communication (MPCC) and the 4 Habits Coding Scheme (4HCS). 174 videotaped family practice office visits and patient self-report measures were analyzed. Patient factors contributing to positive perceptions of patient-centeredness were successful negotiation of decision-making roles and lower post-visit uncertainty. MPCC coding found visits were on average 59% patient-centered (range 12-85%). 4HCS coding showed an average of 83 points (maximum possible 115). However, patients felt their visits were highly patient-centered (mean 3.7, range 1.9-4; maximum possible 4). There was a weak correlation between coding schemes, but no association between coding results and patient variables (number of pre-visit concerns, attainment of desired decision-making role, post-visit uncertainty, patients' perception of patient-centeredness). Coder inter-rater reliability was lower than expected; convergent and divergent validity were not supported. The 4HCS and MPCC operationalize patient-centeredness differently, illustrating a lack of conceptual clarity. The patient's perspective is important. Family practice providers can facilitate a more positive patient perception of patient-centeredness by addressing patient concerns to help reduce patient uncertainty, and by negotiating decision-making roles. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Methodology for extracting local constants from petroleum cracking flows

    DOEpatents

    Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.

    2000-01-01

    A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
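
    Steps (3)-(5) amount to a nonlinear least-squares adjustment of the kinetic constants so that predicted yields match measured yields. A minimal sketch follows, in which `predict_yields` is a hypothetical stand-in for the coupled CFD/kinetics solve and the test conditions and yields are illustrative, not the patent's data.

```python
import numpy as np
from scipy.optimize import least_squares

def predict_yields(k, conditions):
    # Hypothetical placeholder: first-order Arrhenius conversion at each
    # test condition. In practice this is the full coupled CFD/kinetics run.
    temp, residence_time = conditions.T
    return 1.0 - np.exp(-k[0] * np.exp(-k[1] / temp) * residence_time)

conditions = np.array([[800.0, 2.0], [850.0, 2.0], [900.0, 1.5]])  # (T [K], tau [s])
measured = np.array([0.35, 0.48, 0.55])                            # measured product yields

# Step (4): adjust local constants k to match calculated and measured yields.
fit = least_squares(lambda k: predict_yields(k, conditions) - measured,
                    x0=[1.0e3, 5.0e3], bounds=([0.0, 0.0], [np.inf, np.inf]))
k_local = fit.x
print(k_local)  # step (5): validate against held-out runs before accepting
```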

  15. Validation of an algorithm to identify children with biopsy-proven celiac disease from within health administrative data: An assessment of health services utilization patterns in Ontario, Canada

    PubMed Central

    Chan, Jason; Mack, David R.; Manuel, Douglas G.; Mojaverian, Nassim; de Nanassy, Joseph

    2017-01-01

    Importance Celiac disease (CD) is a common pediatric illness, and awareness of gluten-related disorders including CD is growing. Health administrative data represents a unique opportunity to conduct population-based surveillance of this chronic condition and assess the impact of caring for children with CD on the health system. Objective The objective of the study was to validate an algorithm based on health administrative data diagnostic codes to accurately identify children with biopsy-proven CD. We also evaluated trends over time in the use of health services related to CD by children in Ontario, Canada. Study design and setting We conducted a retrospective cohort study and validation study of population-based health administrative data in Ontario, Canada. All cases of biopsy-proven CD diagnosed 2005–2011 in Ottawa were identified through chart review from a large pediatric health care center, and linked to the Ontario health administrative data to serve as the positive reference standard. All other children living within Ottawa served as the negative reference standard. Case-identifying algorithms based on outpatient physician visits with an associated ICD-9 code for CD plus an endoscopy billing code were constructed and tested. Sensitivity, specificity, PPV and NPV were estimated for each algorithm (with 95% CI). Poisson regression, adjusting for sex and age at diagnosis, was used to explore the trend in outpatient visits associated with a CD diagnostic code from 1995–2011. Results The best algorithm to identify CD consisted of an endoscopy billing claim followed by 1 or more adult or pediatric gastroenterologist encounters after the endoscopic procedure. The sensitivity, specificity, PPV, and NPV for the algorithm were: 70.4% (95% CI 61.1–78.4%), >99.9% (95% CI >99.9->99.9%), 53.3% (95% CI 45.1–61.4%) and >99.9% (95% CI >99.9->99.9%) respectively. It identified 1289 suspected CD cases from Ontario-wide administrative data. There was a 9% annual increase in the use of this combination of CD-associated diagnostic codes in physician billing data (RR 1.09, 95% CI 1.07–1.10, P<0.001). Conclusions With its current structure and variables, Ontario health administrative data is not suitable for identifying incident pediatric CD cases. The tested algorithms suffer from poor sensitivity and/or poor PPV, which increase the risk of case misclassification that could lead to biased estimation of the CD incidence rate. This study reinforced the importance of validating the codes used to identify cohorts or outcomes when conducting research using health administrative data. PMID:28662204
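
    The reported operating characteristics follow directly from a 2x2 confusion table. A minimal sketch with illustrative counts (not the study's raw data) chosen so the sensitivity and PPV come out near the reported 70.4% and 53.3%:

```python
# Compute the four standard diagnostic metrics from confusion-table counts.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts: a rare condition with a large negative reference
# population yields near-perfect specificity/NPV yet a modest PPV.
print(diagnostic_metrics(tp=95, fp=83, fn=40, tn=250_000))
```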

  16. Validating the BISON fuel performance code to integral LWR experiments

    DOE PAGES

    Williamson, R. L.; Gamble, K. A.; Perez, D. M.; ...

    2016-03-24

    BISON is a modern finite element-based nuclear fuel performance code that has been under development at the Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Our results demonstrate that 1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable, with deviations between predictions and experimental data within ±10% for early life through high-burnup fuel and only slightly out of these bounds for power ramp experiments, 2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties, and 3) comparison of rod diameter results indicates a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and to more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. The initial rod diameter comparisons were unsatisfactory and have led to consideration of additional separate-effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.

  17. A systematic review of validated methods for identifying transfusion-related ABO incompatibility reactions using administrative and claims data.

    PubMed

    Carnahan, Ryan M; Kee, Vicki R

    2012-01-01

    This paper aimed to systematically review algorithms to identify transfusion-related ABO incompatibility reactions in administrative data, with a focus on studies that have examined the validity of the algorithms. A literature search was conducted using PubMed, Iowa Drug Information Service database, and Embase. A Google Scholar search was also conducted because of the difficulty identifying relevant studies. Reviews were conducted by two investigators to identify studies using data sources from the USA or Canada because these data sources were most likely to reflect the coding practices of Mini-Sentinel data sources. One study was found that validated International Classification of Diseases (ICD-9-CM) codes representing transfusion reactions. None of these cases were ABO incompatibility reactions. Several studies consistently used ICD-9-CM code 999.6, which represents ABO incompatibility reactions, and a technical report identified the ICD-10 code for these reactions. One study included the E-code E8760 for mismatched blood in transfusion in the algorithm. Another study reported finding no ABO incompatibility reaction codes in the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, which contains data of 2.23 million patients who received transfusions, raising questions about the sensitivity of administrative data for identifying such reactions. Two studies reported perfect specificity, with sensitivity ranging from 21% to 83%, for the code identifying allogeneic red blood cell transfusions in hospitalized patients. There is no information to assess the validity of algorithms to identify transfusion-related ABO incompatibility reactions. Further information on the validity of algorithms to identify transfusions would also be useful. Copyright © 2012 John Wiley & Sons, Ltd.

  18. A long non-coding RNA expression profile can predict early recurrence in hepatocellular carcinoma after curative resection.

    PubMed

    Lv, Yufeng; Wei, Wenhao; Huang, Zhong; Chen, Zhichao; Fang, Yuan; Pan, Lili; Han, Xueqiong; Xu, Zihai

    2018-06-20

    The aim of this study was to develop a novel long non-coding RNA (lncRNA) expression signature to accurately predict early recurrence for patients with hepatocellular carcinoma (HCC) after curative resection. Using expression profiles downloaded from The Cancer Genome Atlas database, we identified multiple lncRNAs with differential expression between the early recurrence (ER) group and the non-early recurrence (non-ER) group of HCC. Least absolute shrinkage and selection operator (LASSO) logistic regression models were used to develop a lncRNA-based classifier for predicting ER in the training set. An independent test set was used to validate the predictive value of this classifier. Furthermore, a co-expression network based on these lncRNAs and their highly related genes was constructed, and Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses of genes in the network were performed. We identified 10 differentially expressed lncRNAs, including 3 that were upregulated and 7 that were downregulated in the ER group. The lncRNA-based classifier was constructed from 7 lncRNAs (AL035661.1, PART1, AC011632.1, AC109588.1, AL365361.1, LINC00861 and LINC02084), and its accuracy was 0.83 in the training set, 0.87 in the test set and 0.84 in the total set. ROC curve analysis showed that the AUROC was 0.741 in the training set, 0.824 in the test set and 0.765 in the total set. Functional enrichment analysis suggested that genes highly related to 4 of the lncRNAs are involved in the immune system. This 7-lncRNA expression profile can effectively predict early recurrence after surgical resection for HCC. This article is protected by copyright. All rights reserved.
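
    A minimal sketch of the modeling step described, an L1-penalized (LASSO) logistic classifier with sparse feature selection and AUROC evaluation; the data here are synthetic stand-ins, not the TCGA expression profiles:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic example: 200 samples x 10 candidate lncRNAs; the first three
# features carry the (simulated) recurrence signal.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, :3].sum(axis=1) + rng.normal(size=200) > 0).astype(int)

# L1 penalty drives uninformative coefficients to exactly zero (LASSO-style).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X_train, y_train)

selected = np.flatnonzero(clf.coef_[0])  # lncRNAs retained by the penalty
auc = roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1])
print(selected, round(auc, 3))           # in practice, evaluate on a held-out test set
```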

  19. Use of the Physician Orders for Life-Sustaining Treatment program for patients being discharged from the hospital to the nursing facility.

    PubMed

    Hickman, Susan E; Nelson, Christine A; Smith-Howell, Esther; Hammes, Bernard J

    2014-01-01

    The Physician Orders for Life-Sustaining Treatment (POLST) documents patient preferences as medical orders that transfer across settings with patients. The objectives were to pilot test methods and gather preliminary data about POLST including (1) use at time of hospital discharge, (2) transfers across settings, and (3) consistency with prior decisions. Descriptive with chart abstraction and interviews. Participants were hospitalized patients discharged to a nursing facility and/or their surrogates in La Crosse County, Wisconsin. POLST forms were abstracted from hospital records for 151 patients. Hospital and nursing facility chart data were abstracted and interviews were conducted with an additional 39 patients/surrogates. Overall, 176 patients had valid POLST forms at the time of discharge from the hospital, and many (38.6%; 68/176) only documented code status. When the whole POLST was completed, orders were more often marked as based on a discussion with the patient and/or surrogate than when the form was used just for code status (95.1% versus 13.8%, p<.001). In the follow-up and interview sample, a majority (90.6%; 29/32) of POLST forms written in the hospital were unchanged up to three weeks after nursing facility admission. Most (71.9%; 23/32) appeared consistent with patient or surrogate recall of prior treatment decisions. POLST forms generated in the hospital do transfer with patients across settings, but are often used only to document code status. POLST orders appeared largely consistent with prior treatment decisions. Further research is needed to assess the quality of POLST decisions.

  20. Integration of SimSET photon history generator in GATE for efficient Monte Carlo simulations of pinhole SPECT.

    PubMed

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W

    2008-07-01

    The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: the simulation system for emission tomography (SimSET) and the GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.

  1. Triaxial instabilities in rapidly rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Basak, Arkadip

    2018-06-01

    Viscosity-driven bar-mode secular instabilities of rapidly rotating neutron stars are studied using the LORENE/Nrotstar code. These instabilities set a more rigorous limit on the rotation frequency of a neutron star than the Kepler frequency/mass-shedding limit. The procedure employed in the code consists of perturbing an axisymmetric and stationary configuration of a neutron star and studying its evolution by constructing a series of triaxial quasi-equilibrium configurations. The symmetry-breaking point was found for a polytropic as well as 10 realistic equations of state (EOS) from the CompOSE database. The concept of piecewise-polytropic EOSs has been used to understand the rotational instability of realistic EOSs and was validated with 19 different realistic EOSs from CompOSE. The possibility of detecting quasi-periodic gravitational waves from the viscosity-driven instability with ground-based LIGO/VIRGO interferometers is also briefly discussed.

  2. Performance analysis of LDPC codes on OOK terahertz wireless channels

    NASA Astrophysics Data System (ADS)

    Chun, Liu; Chang, Wang; Jun-Cheng, Cao

    2016-02-01

    Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through the atmospheric channel. The THz wave propagation characteristics and channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their potential for future ultra-high-speed (beyond Gbps) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
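
    As context for the coding gain claimed, a minimal sketch of the uncoded OOK baseline over an attenuating AWGN channel; the path loss and noise level are assumed values, and the LDPC encoding/decoding stage itself is omitted:

```python
import numpy as np

# Uncoded OOK over an AWGN channel with a fixed atmospheric power loss.
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=1_000_000)
loss_db, noise_sigma = 10.0, 0.05       # assumed THz path loss and receiver noise
amplitude = 10 ** (-loss_db / 20)       # field attenuation from the power loss

received = bits * amplitude + rng.normal(scale=noise_sigma, size=bits.size)
decided = (received > amplitude / 2).astype(int)  # midpoint threshold detector
ber = np.mean(decided != bits)
print(f"uncoded OOK BER = {ber:.2e}")   # soft-decision LDPC decoding would push this far lower
```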

  3. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  4. Recent Experimental Results Related to Ejector Mode Studies of Rocket-Based Combined Cycle (RBCC) Engines

    NASA Technical Reports Server (NTRS)

    Cramer, J. M.; Pal, S.; Marshall, W. M.; Santoro, R. J.

    2003-01-01

    Contents include the following: 1. Motivation. Support NASA's 3rd-generation launch vehicle technology program. RBCC is a promising candidate for a 3rd-generation propulsion system. 2. Approach. Focus on ejector mode performance (Mach 0-3). Perform testing on an established flowpath geometry. Use conventional propulsion measurement techniques. Use advanced optical diagnostic techniques to measure local combustion gas properties. 3. Objectives. Gain physical understanding of detailed mixing and combustion phenomena. Establish an experimental data set for CFD code development and validation.

  5. CFD Code Development for Combustor Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    During the lifetime of this grant, work was performed in the areas of model development, code development, code validation, and code application. Model development included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics, and assumed-PDF work. Many of these models were then implemented in the code, and in addition many improvements were made to the code, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models, and turbulence-chemistry interaction methodology. Validation of all new models and code improvements was also performed, and the code was applied to the ZCET program and the NPSS GEW combustor program. Several important items remain under development, including NOx post-processing, assumed-PDF model development, and chemical kinetic development. It is expected that this work will continue under the new grant.

  6. Validity of the coding for herpes simplex encephalitis in the Danish National Patient Registry.

    PubMed

    Jørgensen, Laura Krogh; Dalgaard, Lars Skov; Østergaard, Lars Jørgen; Andersen, Nanna Skaarup; Nørgaard, Mette; Mogensen, Trine Hyrup

    2016-01-01

    Large health care databases are a valuable source of infectious disease epidemiology if diagnoses are valid. The aim of this study was to investigate the accuracy of the recorded diagnosis coding of herpes simplex encephalitis (HSE) in the Danish National Patient Registry (DNPR). The DNPR was used to identify all hospitalized patients, aged ≥15 years, with a first-time diagnosis of HSE according to the International Classification of Diseases, tenth revision (ICD-10), from 2004 to 2014. To validate the coding of HSE, we collected data from the Danish Microbiology Database, from departments of clinical microbiology, and from patient medical records. Cases were classified as confirmed, probable, or no evidence of HSE. We estimated the positive predictive value (PPV) of the HSE diagnosis coding stratified by diagnosis type, study period, and department type. Furthermore, we estimated the proportion of HSE cases coded with nonspecific ICD-10 codes of viral encephalitis and also the sensitivity of the HSE diagnosis coding. We were able to validate 398 (94.3%) of the 422 HSE diagnoses identified via the DNPR. Of these, 202 (50.8%) were classified as confirmed cases and 29 (7.3%) as probable cases, providing an overall PPV of 58.0% (95% confidence interval [CI]: 53.0-62.9). For "Encephalitis due to herpes simplex virus" (ICD-10 code B00.4), the PPV was 56.6% (95% CI: 51.1-62.0). Similarly, the PPV for "Meningoencephalitis due to herpes simplex virus" (ICD-10 code B00.4A) was 56.8% (95% CI: 39.5-72.9). "Herpes viral encephalitis" (ICD-10 code G05.1E) had a PPV of 75.9% (95% CI: 56.5-89.7), thereby representing the highest PPV. The estimated sensitivity was 95.5%. The PPVs of the ICD-10 diagnosis coding for adult HSE in the DNPR were relatively low. Hence, the DNPR should be used with caution when studying patients with encephalitis caused by herpes simplex virus.

  7. Integrated analysis of long non-coding RNAs in human gastric cancer: An in silico study.

    PubMed

    Han, Weiwei; Zhang, Zhenyu; He, Bangshun; Xu, Yijun; Zhang, Jun; Cao, Weijun

    2017-01-01

    Accumulating evidence highlights the important role of long non-coding RNAs (lncRNAs) in a large number of biological processes. However, the knowledge of genome scale expression of lncRNAs and their potential biological function in gastric cancer is still lacking. Using RNA-seq data from 420 gastric cancer patients in The Cancer Genome Atlas (TCGA), we identified 1,294 lncRNAs differentially expressed in gastric cancer compared with adjacent normal tissues. We also found 247 lncRNAs differentially expressed between intestinal subtype and diffuse subtype. Survival analysis revealed 33 lncRNAs independently associated with patient overall survival, of which 6 lncRNAs were validated in the internal validation set. There were 181 differentially expressed lncRNAs located in the recurrent somatic copy number alterations (SCNAs) regions and their correlations between copy number and RNA expression level were also analyzed. In addition, we inferred the function of lncRNAs by construction of a co-expression network for mRNAs and lncRNAs. Together, this study presented an integrative analysis of lncRNAs in gastric cancer and provided a valuable resource for further functional research of lncRNAs in gastric cancer.

  8. MCNPX simulation of proton dose distribution in homogeneous and CT phantoms

    NASA Astrophysics Data System (ADS)

    Lee, C. C.; Lee, Y. J.; Tung, C. J.; Cheng, H. W.; Chao, T. C.

    2014-02-01

    A dose simulation system was constructed based on the MCNPX Monte Carlo package to simulate proton dose distribution in homogeneous and CT phantoms. Conversion from the Hounsfield unit of a patient CT image set to the material information necessary for Monte Carlo simulation is based on Schneider's approach. In order to validate this simulation system, an inter-comparison of depth dose distributions among those obtained from the MCNPX, GEANT4 and FLUKA codes for a 160 MeV monoenergetic proton beam incident normally on the surface of a homogeneous water phantom was performed. For dose validation within the CT phantom, direct comparison with measurement is infeasible. Instead, this study took the approach of indirectly comparing the 50% ranges (R50%) along the central axis computed by our system to the NIST CSDA ranges for beams with 160 and 115 MeV energies. Comparison results within the homogeneous phantom show good agreement. Differences in simulated R50% among the three codes are less than 1 mm. For results within the CT phantom, the MCNPX-simulated water-equivalent Req,50% are compatible with the CSDA water-equivalent ranges from the NIST database, with differences of 0.7 and 4.1 mm for the 160 and 115 MeV beams, respectively.
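
    A minimal sketch of a Schneider-style Hounsfield-unit-to-material conversion; the bin edges, material names, and linear density model below are illustrative assumptions, not the published calibration:

```python
import numpy as np

# Piecewise mapping from Hounsfield units to (material, mass density) for
# Monte Carlo input. Edges and densities are illustrative only.
HU_EDGES = np.array([-1000, -950, -120, -80, 120, 1600, 3000])
MATERIALS = ["air", "lung", "adipose", "soft tissue", "bone", "dense bone"]

def hu_to_material(hu):
    idx = np.clip(np.searchsorted(HU_EDGES, hu, side="right") - 1,
                  0, len(MATERIALS) - 1)
    density = 1.0 + hu / 1000.0          # crude linear density model (g/cm^3)
    return MATERIALS[idx], max(density, 0.001)

print(hu_to_material(-700))  # ('lung', 0.3)
print(hu_to_material(300))   # ('bone', 1.3)
```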

  9. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    PubMed

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. AMIDE: a free software tool for multimodality medical image analysis.

    PubMed

    Loening, Andreas Markus; Gambhir, Sanjiv Sam

    2003-07-01

    Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.

  11. Threats to Validity When Using Open-Ended Items in International Achievement Studies: Coding Responses to the PISA 2012 Problem-Solving Test in Finland

    ERIC Educational Resources Information Center

    Arffman, Inga

    2016-01-01

    Open-ended (OE) items are widely used to gather data on student performance in international achievement studies. However, several factors may threaten validity when using such items. This study examined Finnish coders' opinions about threats to validity when coding responses to OE items in the PISA 2012 problem-solving test. A total of 6…

  12. Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing

    NASA Astrophysics Data System (ADS)

    Salamone, Joseph A., III

    Sonic boom focusing theory has been augmented with new terms that account for mean flow effects in the direction of propagation and also for atmospheric absorption/dispersion due to molecular relaxation of oxygen and nitrogen. The newly derived model equation was numerically implemented in a computer code. The computer code was numerically validated using a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check was performed to verify the linear diffraction component of the code calculations. The computer code was experimentally validated using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic and Analysis Measurement Program (SCAMP) flight test. The computer code was in good agreement with both the numerical and experimental validation. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.

  13. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... code sets inherent to a transaction, and not related to the format of the transaction. Data elements... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  14. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... code sets inherent to a transaction, and not related to the format of the transaction. Data elements... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  15. Validation of NASA Thermal Ice Protection Computer Codes Part 2 - LEWICE/Thermal

    DOT National Transportation Integrated Search

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal 1 (electrothermal de-icing and anti-icing), and ANTICE 2 (hot gas and el...

  16. Comparative Modelling of the Spectra of Cool Giants

    NASA Technical Reports Server (NTRS)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; et al.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  17. VizieR Online Data Catalog: Analytical model for irradiated atmospheres (Parmentier+, 2015)

    NASA Astrophysics Data System (ADS)

    Parmentier, V.; Guillot, T.; Fortney, J.; Marley, M.

    2014-11-01

    The model has six parameters to describe the opacities:
    - {kappa}(N) is the Rosseland mean opacity at each level of the atmosphere; it does not have to be constant with depth.
    - Gp is the ratio of the thermal Planck mean opacity to the thermal Rosseland mean opacity.
    - Beta is the width ratio of the two thermal bands in frequency space.
    - Gv1 is the ratio of the visible opacity in the first visible band to the thermal Rosseland mean opacity.
    - Gv2 is the ratio of the visible opacity in the second visible band to the thermal Rosseland mean opacity.
    - Gv3 is the ratio of the visible opacity in the third visible band to the thermal Rosseland mean opacity.
    Each visible band has a fixed width of 1/3. Additional parameters describe the physical setting:
    - Teq0 is the equilibrium temperature of the planet for 0 albedo and full redistribution of energy.
    - mu is the angle between the vertical direction and the stellar direction. For average profiles set mu=1/sqrt(3).
    - f is a parameter equal to 0.5 to compute a dayside-average profile and 0.25 for a planet-average profile.
    - Tint is the internal temperature, given by the internal luminosity.
    - grav is the gravity of the planet.
    - Ab is the Bond albedo of the planet.
    - P(i) are the pressure levels where the temperature is computed.
    - N is the number of atmospheric levels.
    Several options are available in order to use the coefficients derived in Parmentier et al. (2014A&A...562A.133P, Cat. J/A+A/562/A133). ROSS can take the values:
    - "USER" for a Rosseland mean opacity set by the user, {kappa}(nlevels), through the atmosphere.
    - "AUTO" in order to use {kappa}(P,T), the functional form of the Rosseland mean opacities provided by Valencia et al. (2013ApJ...775...10V) and based on the opacities calculated by Freedman et al. (2008ApJS..174..504F). The value of {kappa} is then recalculated and the initial value set by the user is NOT taken into account.
    COEFF can take the values:
    - "USER" for coefficients set by the user.
    - "AUTO" for using the fit of the coefficients provided in Parmentier et al. (2014A&A...562A.133P, Cat. J/A+A/562/A133). In that case all the coefficients set by the user are NOT taken into account (apart from the Rosseland mean opacities).
    COMP can take the values (valid only if COEFF="AUTO"):
    - "SOLAR" to use the fit of the coefficients for a solar-composition atmosphere.
    - "NOTIO" to use the fit of the coefficients without TiO.
    STAR can take the value (valid only if COEFF="AUTO"):
    - "SUN" to use the fit of the coefficients for a Sun-like stellar irradiation.
    ALBEDO can take the values:
    - "USER" for a user-defined albedo.
    - "AUTO" to use the fit of the albedos for solar-composition, clear-sky atmospheres.
    CONV can be either:
    - "NO" for a pure radiative solution.
    - "YES" for a radiative/convective solution (without taking into account detached convective layers).
    The code and all the outputs use SI units. Installation and use: to install the code use the command "make". To test use "make test". The test should be done with the downloaded version of the code, without any changes. To execute the code, once it has been compiled, type ./NonGrey in the same directory. This will output a file PTprofile.csv with the temperature structure in csv format and a file PTprofile.dat in dat format. The input parameters must be changed inside the file paper2.f90. It is necessary to compile the code again each time. The subroutine tprofile2e.f90 can be directly implemented into one's code. (5 data files).

  18. Validation of Ray Tracing Code Refraction Effects

    NASA Technical Reports Server (NTRS)

    Heath, Stephanie L.; McAninch, Gerry L.; Smith, Charles D.; Conner, David A.

    2008-01-01

    NASA's current predictive capabilities using the ray tracing program (RTP) are validated using helicopter noise data taken at Eglin Air Force Base in 2007. By including refractive propagation effects due to wind and temperature, the ray tracing code is able to explain large variations in the data observed during the flight test.

  19. Validation of a Communication Process Measure for Coding Control in Counseling.

    ERIC Educational Resources Information Center

    Heatherington, Laurie

    The increasingly popular view of the counseling process from an interactional perspective necessitates the development of new measurement instruments which are suitable to the study of the reciprocal interaction between people. The validity of the Relational Communication Coding System, an instrument which operationalizes the constructs of…

  20. Positive predictive value of cardiac examination, procedure and surgery codes in the Danish National Patient Registry: a population-based validation study

    PubMed Central

    Adelborg, Kasper; Sundbøll, Jens; Munch, Troels; Frøslev, Trine; Sørensen, Henrik Toft; Bøtker, Hans Erik; Schmidt, Morten

    2016-01-01

    Objective Danish medical registries are widely used for cardiovascular research, but little is known about the data quality of cardiac interventions. We computed positive predictive values (PPVs) of codes for cardiac examinations, procedures and surgeries registered in the Danish National Patient Registry during 2010–2012. Design Population-based validation study. Setting We randomly sampled patients from 1 university hospital and 2 regional hospitals in the Central Denmark Region. Participants 1239 patients undergoing different cardiac interventions. Main outcome measure PPVs, with medical record review as the reference standard. Results A total of 1233 medical records (99% of the total sample) were available for review. PPVs ranged from 83% to 100%. For examinations, the PPV was 98% overall, reflecting PPVs of 97% for echocardiography, 97% for right heart catheterisation and 100% for coronary angiogram. For procedures, the PPV was 98% overall, with PPVs of 98% for thrombolysis, 92% for cardioversion, 100% for radiofrequency ablation, 98% for percutaneous coronary intervention, and 100% for both cardiac pacemakers and implantable cardiac defibrillators. For cardiac surgery, the overall PPV was 99%, encompassing PPVs of 100% for mitral valve surgery, 99% for aortic valve surgery, 98% for coronary artery bypass graft surgery, and 100% for heart transplantation. The accuracy of coding was consistent within age, sex, and calendar year categories, and the agreement between independent reviewers was high (99%). Conclusions Cardiac examinations, procedures and surgeries have high PPVs in the Danish National Patient Registry. PMID:27940630

  1. A clinical decision rule to prioritize polysomnography in patients with suspected sleep apnea.

    PubMed

    Rodsutti, Julvit; Hensley, Michael; Thakkinstian, Ammarin; D'Este, Catherine; Attia, John

    2004-06-15

    To derive and validate a clinical decision rule that can help to prioritize patients who are on waiting lists for polysomnography, prospective data were collected on consecutive patients referred to a sleep center: the Newcastle Sleep Disorders Centre, University of Newcastle, NSW, Australia. Participants were consecutive adult patients who had been scheduled for initial diagnostic polysomnography. Eight hundred and thirty-seven patients were used for derivation of the decision rule. An apnea-hypopnoea index of at least 5 was used as the cutoff point to diagnose sleep apnea. Fifteen clinical features were included in analyses using logistic regression to construct a model from the derivation data set. Only 5 variables (age, sex, body mass index, snoring, and stopping breathing during sleep) were significantly associated with sleep apnea. A scoring scheme based on the regression coefficients was developed, and the total score was trichotomized into low-, moderate-, and high-risk groups with prevalences of sleep apnea of 8%, 51%, and 82%, respectively. Color-coded tables were developed for ease of use. The clinical decision rule was validated on a separate set of 243 patients. Receiver operating characteristic analysis confirmed that the decision rule performed well, with the area under the curve being similar for both the derivation and validation sets: 0.81 and 0.79, P = .612. We conclude that this decision rule was able to accurately classify the risk of sleep apnea and will be useful for prioritizing patients with suspected sleep apnea who are on waiting lists for polysomnography.
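
    A minimal sketch of such a coefficient-based scoring scheme with trichotomized risk groups; the point values and cut-offs below are illustrative assumptions, not the published rule:

```python
# Points loosely stand in for rounded regression coefficients of the five
# significant predictors; actual values would come from the fitted model.
def apnea_risk_score(age, male, bmi, snoring, stops_breathing):
    score = 0
    score += 2 if age >= 50 else 0
    score += 2 if male else 0
    score += 3 if bmi >= 30 else 1 if bmi >= 25 else 0
    score += 2 if snoring else 0
    score += 3 if stops_breathing else 0
    return score

def risk_group(score):
    # Trichotomize the total score into low / moderate / high risk.
    return "low" if score <= 3 else "moderate" if score <= 7 else "high"

s = apnea_risk_score(age=55, male=True, bmi=32, snoring=True, stops_breathing=True)
print(s, risk_group(s))  # 12 -> 'high' (82% prevalence reported in the high-risk group)
```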

  2. Rapid Trust Establishment for Transient Use of Unmanaged Hardware

    DTIC Science & Technology

    2006-12-01

    [Standard Form 298 report-documentation residue removed.] Keywords: Establishing... Figure residue: (a) Boot with trust initiator; (b) Boot trusted Host OS (from disk) and validate the OS; (c) Launch and validate applications (untrusted code vs. trusted code). Trust alerter notification to the user: "Execution of process with Id 3535 has been blocked to minimize security risks."

  3. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
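
    A minimal sketch of the "verification file" idea described above: record double-precision sums of key solution variables per case, then compare consecutive code versions. The file layout and exact-match policy are assumptions for illustration:

```python
import json

# Record a checksum (double-precision sum) per key variable for a test case.
def checksum(case_results):
    return {name: float(sum(values)) for name, values in case_results.items()}

# Compare a new run's sums against a stored verification file; exact
# comparison flags any drift between consecutive code versions.
def regression_check(old_file, new_sums):
    with open(old_file) as f:
        old_sums = json.load(f)
    return {k: (old_sums.get(k), v) for k, v in new_sums.items()
            if old_sums.get(k) != v}

new = checksum({"pressure": [1.01e5, 1.02e5], "void_fraction": [0.0, 0.15]})
with open("verification.json", "w") as f:
    json.dump(new, f)
print(regression_check("verification.json", new))  # {} -> no regression detected
```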

  4. CFD Modeling of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.

    2001-01-01

    NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.

  5. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix is inverted even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  6. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix is inverted even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  7. Evaluation in industry of a draft code of practice for manual handling.

    PubMed

    Ashby, Liz; Tappin, David; Bentley, Tim

    2004-05-01

    This paper reports findings from a study that evaluated the draft New Zealand Code of Practice for Manual Handling. The evaluation assessed the ease of use, applicability and validity of the Code, and in particular of the associated manual handling hazard assessment tools, within New Zealand industry. The Code was studied in a sample of eight companies from four sectors of industry. Subjective feedback and objective findings indicated that the Code was useful, applicable and informative. The manual handling hazard assessment tools incorporated in the Code could be adequately applied by most users, with risk assessment outcomes largely consistent with the findings of researchers using more specific ergonomics methodologies. However, some changes were recommended to the risk assessment tools to improve usability and validity. The evaluation concluded that both the Code and the tools within it would benefit from simplification, improved typography and layout, and industry-specific information on manual handling hazards.

  8. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE PAGES

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    2018-02-21

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
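
    A minimal sketch of linearized uncertainty propagation of the kind described, mapping signal covariance to reconstructed-parameter covariance through the model Jacobian; the Jacobian entries and measurement variances below are illustrative assumptions:

```python
import numpy as np

# Linearized propagation: Sigma_p = (J^T Sigma_s^-1 J)^-1, where J is the
# Jacobian of predicted signals with respect to the model parameters.
def propagate(jacobian, signal_cov):
    info = jacobian.T @ np.linalg.inv(signal_cov) @ jacobian  # Fisher information
    return np.linalg.inv(info)                                # parameter covariance

J = np.array([[1.0, 0.2],      # d(signal_i)/d(parameter_j), assumed values
              [0.1, 0.9],
              [0.5, 0.5]])
sigma_s = np.diag([0.01, 0.02, 0.01])   # measurement variances of the input signals

sigma_p = propagate(J, sigma_s)
print(np.sqrt(np.diag(sigma_p)))        # 1-sigma uncertainties of the reconstructed parameters
```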

  9. Simulation of the AC corona phenomenon with experimental validation

    NASA Astrophysics Data System (ADS)

    Villa, Andrea; Barbieri, Luca; Marco, Gondola; Malgesini, Roberto; Leon-Garzon, Andres R.

    2017-11-01

    The corona effect, and in particular the Trichel phenomenon, is an important aspect of plasma physics with many technical applications, such as pollution reduction and surface and medical treatments. This phenomenon is also associated with components used in the power industry, where it is in many cases the source of electromagnetic disturbance, noise and production of undesired chemically active species. Although the power industry to date mainly uses alternating current (AC) transmission, most studies of the corona effect have been carried out with direct current (DC) sources. There is therefore technical interest in validating numerical codes capable of simulating the AC phenomenon. In this work we describe a set of partial differential equations that is comprehensive enough to reproduce the distinctive features of the corona in an AC regime. The model embeds selectable chemical databases, comprising tens of chemical species and hundreds of reactions, the thermal dynamics of neutral species, and photoionization. A large set of parameters, deduced from experiments and numerical estimations, is compared to assess the effectiveness of the proposed approach.

  10. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.

  11. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

    We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 − P2 when P1 − P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
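
    The paper defines its own correlation structure. As a rough illustration of how shared reader and case effects generate correlated binary agreement data, a probit-style random-effects generator can look like the sketch below; the variance split and study sizes are assumptions, not the authors' model.

    ```python
    # Toy generator of reader-by-case binary agreement data in which shared
    # case and reader effects induce correlation across readers and cases.
    # An illustrative random-effects sketch, not the paper's simulation model.
    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(0)
    n_readers, n_cases, p_agree = 5, 100, 0.8
    var_reader, var_case, var_noise = 0.2, 0.3, 0.5   # assumed split, sums to 1

    mu = NormalDist().inv_cdf(p_agree)                # threshold so E[agree] = p_agree
    r = rng.normal(0.0, np.sqrt(var_reader), (n_readers, 1))
    c = rng.normal(0.0, np.sqrt(var_case), (1, n_cases))
    e = rng.normal(0.0, np.sqrt(var_noise), (n_readers, n_cases))
    agree = (mu + r + c + e > 0).astype(int)          # 1 = agrees with truth state

    print("empirical agreement rate:", agree.mean())
    ```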

  12. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing

    PubMed Central

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.

    2014-01-01

    We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051

  13. A CFD Model for High Pressure Liquid Poison Injection for CANDU-6 Shutdown System No. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo Wook Rhee; Chang Jun Jeong; Hye Jeong Yun

    2002-07-01

    In a CANDU reactor, one of the two reactor shutdown systems is the liquid poison injection system, which injects highly pressurized liquid neutron poison into the moderator tank via small holes on the nozzle pipes. To ensure safe shutdown of the reactor, the poison curtains generated by these jets must provide quick and sufficient negative reactivity during the early stage of an accident. Producing the neutron cross sections needed for this work requires the poison concentration distribution during the transient. In this study, a set of models for analyzing the transient poison concentration induced by the high pressure poison injection jets activated upon reactor trip in a CANDU-6 moderator tank has been developed and used to generate the poison concentration distribution of the poison curtains induced by the high pressure jets injected into the vacant region between the pressure tube banks. The poison injection rate through the jet holes drilled on the nozzle pipes is obtained with a 1-D transient hydrodynamic code called ALITRIG, and this injection rate provides the inlet boundary condition to a 3-D CFD model of the moderator tank based on CFX4.3, a CFD code, to simulate the formation of the poison jet curtain inside the moderator tank. For validation, this model was compared against a poison injection experiment performed at BARC. In conclusion, this set of models is judged to be appropriate. (authors)

  14. Experimental program for real gas flow code validation at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.; Strawa, Anthony W.; Sharma, Surendra P.; Park, Chul

    1989-01-01

    The experimental program for validating real gas hypersonic flow codes at NASA Ames Research Center is described. Ground-based test facilities used include ballistic ranges, shock tubes and shock tunnels, arc jet facilities and heated-air hypersonic wind tunnels. Also included are large-scale computer systems for kinetic theory simulations and benchmark code solutions. Flight tests consist of the Aeroassist Flight Experiment, the Space Shuttle, Project Fire 2, and planetary probes such as Galileo, Pioneer Venus, and PAET.

  15. Validation: Codes to compare simulation data to various observations

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.

    2017-02-01

    Validation provides codes to compare simulated data to several observations: simulated stellar mass and star formation rate; the simulated stellar mass function against observed stellar mass functions from PRIMUS or SDSS-GALEX in several redshift bins from 0.01-1.0; and the simulated B band luminosity function against the corresponding observations. It also creates plots for various attributes, including stellar mass functions and stellar mass to halo mass. These codes can model predictions (in some cases alongside observational data) to test other mock catalogs.

  16. Invalid before impaired: an emerging paradox of embedded validity indicators.

    PubMed

    Erdodi, Laszlo A; Lichtenstein, Jonathan D

    Embedded validity indicators (EVIs) are cost-effective psychometric tools to identify non-credible response sets during neuropsychological testing. As research on EVIs expands, assessors are faced with an emerging contradiction: the range of credible impairment disappears between the 'normal' and 'invalid' ranges of performance. We labeled this phenomenon the invalid-before-impaired paradox. This study was designed to explore the origin of this psychometric anomaly, subject it to empirical investigation, and generate potential solutions. Archival data were analyzed from a mixed clinical sample of 312 patients (mean age = 45.2 years; mean education = 13.6 years) medically referred for neuropsychological assessment. The distribution of scores on eight subtests of the third and fourth editions of the Wechsler Adult Intelligence Scale (WAIS) was examined in relation to the standard normal curve and two performance validity tests (PVTs). Although WAIS subtests varied in their sensitivity to non-credible responding, they were all significant predictors of performance validity. While subtests previously identified as EVIs (Digit Span, Coding, and Symbol Search) were comparably effective at differentiating credible and non-credible response sets, their classification accuracy was driven by their base rate of low scores, requiring different cutoffs to achieve comparable specificity. Invalid performance had a global effect on WAIS scores. Genuine impairment and non-credible performance can co-exist, are often intertwined, and may be psychometrically indistinguishable. A compromise between the alpha and beta bias on PVTs, based on a balanced, objective evaluation of the evidence and requiring concessions from both sides, is needed to maintain or restore the credibility of performance validity assessment.

  17. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate partial differential equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
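
    As an example of the solution-verification arithmetic such a package automates: the observed order of accuracy, the Richardson-extrapolated value, and the fine-grid GCI all follow from solutions on three systematically refined grids. The solution values below are invented for illustration.

    ```python
    # Observed order of accuracy and Grid Convergence Index (GCI) from three
    # grid levels: the standard Roache-style calculation. Sample values are
    # made up; a constant refinement ratio r is assumed.
    import math

    f1, f2, f3 = 0.9713, 0.9655, 0.9501   # fine, medium, coarse solutions (assumed)
    r = 2.0                                # grid refinement ratio

    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)   # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1)                     # Richardson estimate
    gci_fine = 1.25 * abs((f2 - f1) / f1) / (r**p - 1)        # safety factor Fs = 1.25

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated value = {f_exact:.5f}")
    print(f"fine-grid GCI = {100 * gci_fine:.2f}%")
    ```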

  18. Open-source platform to benchmark fingerprints for ligand-based virtual screening

    PubMed Central

    2013-01-01

    Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
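
    The heart of such a benchmark, ranking a library by similarity to a query and counting recovered actives, is small. The sketch below uses random bit-set fingerprints and plain Tanimoto similarity rather than a real cheminformatics toolkit, so all data are synthetic.

    ```python
    # Rank a toy fingerprint library by Tanimoto similarity to a query and
    # score early recovery of actives. Fingerprints are random "on-bit" sets;
    # a real study would compute them from molecular structures.
    import random

    def tanimoto(a: set, b: set) -> float:
        # |A & B| / |A | B| over on-bit sets
        return len(a & b) / len(a | b) if (a or b) else 0.0

    random.seed(1)
    n_bits = 1024
    def random_fp(n_on=80):
        return set(random.sample(range(n_bits), n_on))

    query = random_fp()
    actives = [query ^ random_fp(10) for _ in range(20)]   # near-duplicates of the query
    decoys = [random_fp() for _ in range(500)]

    library = [(fp, 1) for fp in actives] + [(fp, 0) for fp in decoys]
    ranked = sorted(library, key=lambda t: tanimoto(query, t[0]), reverse=True)

    print("actives in top 50:", sum(label for _, label in ranked[:50]))
    ```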

  19. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  20. Data Visualization and Analysis Tools for the Global Precipitation Measurement (GPM) Validation Network

    NASA Technical Reports Server (NTRS)

    Morris, Kenneth R.; Schwaller, Mathew

    2010-01-01

    The Validation Network (VN) prototype for the Global Precipitation Measurement (GPM) Mission compares data from the Tropical Rainfall Measuring Mission (TRMM) satellite Precipitation Radar (PR) to similar measurements from U.S. and international operational weather radars. This prototype is a major component of the GPM Ground Validation System (GVS). The VN provides a means for the precipitation measurement community to identify and resolve significant discrepancies between the ground radar (GR) observations and similar satellite observations. The VN prototype is based on research results and computer code described by Anagnostou et al. (2001), Bolen and Chandrasekar (2000), and Liao et al. (2001), and has previously been described by Morris, et al. (2007). Morris and Schwaller (2009) describe the PR-GR volume-matching algorithm used to create the VN match-up data set used for the comparisons. This paper describes software tools that have been developed for visualization and statistical analysis of the original and volume matched PR and GR data.

  1. Ab initio analytical Raman intensities for periodic systems through a coupled perturbed Hartree-Fock/Kohn-Sham method in an atomic orbital basis. II. Validation and comparison with experiments

    NASA Astrophysics Data System (ADS)

    Maschio, Lorenzo; Kirtman, Bernard; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto

    2013-10-01

    In this work, we validate a new, fully analytical method for calculating Raman intensities of periodic systems, developed and presented in Paper I [L. Maschio, B. Kirtman, M. Rérat, R. Orlando, and R. Dovesi, J. Chem. Phys. 139, 164101 (2013)]. Our validation of this method and its implementation in the CRYSTAL code is done through several internal checks as well as comparison with experiment. The internal checks include consistency of results when increasing the number of periodic directions (from 0D to 1D, 2D, 3D), comparison with numerical differentiation, and a test of the sum rule for derivatives of the polarizability tensor. The choice of basis set as well as the Hamiltonian is also studied. Simulated Raman spectra of α-quartz and of the UiO-66 Metal-Organic Framework are compared with the experimental data.
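
    The abstract does not state which sum rule is tested; the standard internal check of this kind is the translational-invariance (acoustic) sum rule for polarizability derivatives, sketched below for orientation only.

    ```latex
    % Translational invariance: rigidly displacing all atoms A along any
    % Cartesian direction k leaves the polarizability unchanged, so the
    % atomic derivatives of each tensor component must sum to zero.
    \sum_{A} \frac{\partial \alpha_{ij}}{\partial R_{A,k}} = 0,
    \qquad i, j, k \in \{x, y, z\}
    ```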

  2. Design optimization of beta- and photovoltaic conversion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wichner, R.; Blum, A.; Fischer-Colbrie, E.

    1976-01-08

    This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material, single or polycrystalline, provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.
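
    For a flavor of the device calculation such a code performs internally, the maximum power point of an ideal single-diode model can be located numerically. The current values below are illustrative assumptions, not the report's device parameters.

    ```python
    # Maximum-power-point search for an ideal single-diode I-V model, the
    # kind of inner calculation a photovoltaic/betavoltaic optimization code
    # performs. All parameter values are illustrative.
    import numpy as np

    q_over_kT = 1.0 / 0.02585       # 1/V at room temperature
    I_L = 30e-3                     # light/beta generated current [A] (assumed)
    I_0 = 1e-9                      # diode saturation current [A] (assumed)

    V = np.linspace(0.0, 0.6, 2001)
    I = I_L - I_0 * (np.exp(q_over_kT * V) - 1.0)   # ideal diode equation
    P = I * V
    k = np.argmax(P)
    print(f"V_mp = {V[k]:.3f} V, I_mp = {I[k]*1e3:.2f} mA, P_max = {P[k]*1e3:.2f} mW")
    ```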

  3. Current and planned numerical development for improving computing performance for long duration and/or low pressure transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faydide, B.

    1997-07-01

    This paper presents the current and planned numerical development for improving computing performance for Cathare applications needing real time, such as simulator applications. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, covering physical models, numerical topics, and validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past, using a simplified and fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project, which deals with the use of the most recent version of Cathare inside simulators. In this frame, the numerical developments aim at speeding up the calculation process through parallel processing and at improving code reliability over a large set of NPP transients.

  4. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are included for a code-to-code comparison.

  5. Quantifying Risk for Anxiety Disorders in Preschool Children: A Machine Learning Approach.

    PubMed

    Carpenter, Kimberly L H; Sprechmann, Pablo; Calderbank, Robert; Sapiro, Guillermo; Egger, Helen L

    2016-01-01

    Early childhood anxiety disorders are common, impairing, and predictive of anxiety and mood disorders later in childhood. Epidemiological studies over the last decade find that the prevalence of impairing anxiety disorders in preschool children ranges from 0.3% to 6.5%. Yet, less than 15% of young children with an impairing anxiety disorder receive a mental health evaluation or treatment. One possible reason for the low rate of care for anxious preschoolers is the lack of affordable, timely, reliable and valid tools for identifying young children with clinically significant anxiety. Diagnostic interviews assessing psychopathology in young children require intensive training, take hours to administer and code, and are not available for use outside of research settings. The Preschool Age Psychiatric Assessment (PAPA) is a reliable and valid structured diagnostic parent-report interview for assessing psychopathology, including anxiety disorders, in 2 to 5 year old children. In this paper, we apply machine-learning tools to already collected PAPA data from two large community studies to identify sub-sets of PAPA items that could be developed into an efficient, reliable, and valid screening tool to assess a young child's risk for an anxiety disorder. Using machine learning, we were able to decrease by an order of magnitude the number of items needed to identify a child who is at risk for an anxiety disorder with an accuracy of over 96% for both generalized anxiety disorder (GAD) and separation anxiety disorder (SAD). Additionally, rather than considering GAD or SAD as discrete/binary entities, we present a continuous risk score representing the child's risk of meeting criteria for GAD or SAD. Identification of a short question-set that assesses risk for an anxiety disorder could be a first step toward development and validation of a relatively short screening tool feasible for use in pediatric clinics and daycare/preschool settings.
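
    One conventional way to reproduce the item-reduction step described here (the study's exact ML pipeline may differ) is an L1-penalized logistic regression, which zeroes out most item weights and whose predicted probability doubles as a continuous risk score. The data below are synthetic stand-ins for PAPA items.

    ```python
    # Sketch of screening-item reduction: L1-penalized logistic regression
    # retains a small subset of informative items and yields a continuous
    # risk score. Synthetic data; hypothetical item indices.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_children, n_items = 1000, 120
    X = rng.integers(0, 4, size=(n_children, n_items)).astype(float)  # item scores 0-3
    true_items = [3, 17, 42, 88]                                      # informative items (made up)
    logit = X[:, true_items].sum(axis=1) - 6.0
    y = (rng.random(n_children) < 1 / (1 + np.exp(-logit))).astype(int)

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    clf.fit(X, y)

    kept = np.flatnonzero(clf.coef_[0])          # most coefficients are driven to zero
    print(f"items retained: {len(kept)} of {n_items}")
    risk = clf.predict_proba(X)[:, 1]            # continuous risk score per child
    print("example risk scores:", np.round(risk[:5], 3))
    ```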

  6. Quantifying Risk for Anxiety Disorders in Preschool Children: A Machine Learning Approach

    PubMed Central

    Calderbank, Robert; Sapiro, Guillermo; Egger, Helen L.

    2016-01-01

    Early childhood anxiety disorders are common, impairing, and predictive of anxiety and mood disorders later in childhood. Epidemiological studies over the last decade find that the prevalence of impairing anxiety disorders in preschool children ranges from 0.3% to 6.5%. Yet, less than 15% of young children with an impairing anxiety disorder receive a mental health evaluation or treatment. One possible reason for the low rate of care for anxious preschoolers is the lack of affordable, timely, reliable and valid tools for identifying young children with clinically significant anxiety. Diagnostic interviews assessing psychopathology in young children require intensive training, take hours to administer and code, and are not available for use outside of research settings. The Preschool Age Psychiatric Assessment (PAPA) is a reliable and valid structured diagnostic parent-report interview for assessing psychopathology, including anxiety disorders, in 2 to 5 year old children. In this paper, we apply machine-learning tools to already collected PAPA data from two large community studies to identify sub-sets of PAPA items that could be developed into an efficient, reliable, and valid screening tool to assess a young child’s risk for an anxiety disorder. Using machine learning, we were able to decrease by an order of magnitude the number of items needed to identify a child who is at risk for an anxiety disorder with an accuracy of over 96% for both generalized anxiety disorder (GAD) and separation anxiety disorder (SAD). Additionally, rather than considering GAD or SAD as discrete/binary entities, we present a continuous risk score representing the child’s risk of meeting criteria for GAD or SAD. Identification of a short question-set that assesses risk for an anxiety disorder could be a first step toward development and validation of a relatively short screening tool feasible for use in pediatric clinics and daycare/preschool settings. PMID:27880812

  7. Validation of Case Finding Algorithms for Hepatocellular Cancer from Administrative Data and Electronic Health Records using Natural Language Processing

    PubMed Central

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2013-01-01

    Background: Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods: We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results: 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion: A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403
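
    For reference, the three reported operating characteristics come straight from a 2x2 table of algorithm flag versus chart review. The flagged counts below echo the abstract (1138 flagged, 773 confirmed); the negative counts are invented, chosen only to be roughly consistent with the reported sensitivity and specificity.

    ```python
    # PPV, sensitivity, and specificity from a 2x2 confusion table of
    # algorithm flag vs. chart-review gold standard. fn and tn are invented.
    def ppv_sens_spec(tp, fp, fn, tn):
        return tp / (tp + fp), tp / (tp + fn), tn / (tn + fp)

    tp, fp, fn, tn = 773, 365, 40, 4800   # example counts, not the study's full table
    ppv, sens, spec = ppv_sens_spec(tp, fp, fn, tn)
    print(f"PPV = {ppv:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
    ```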

  8. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    PubMed

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.

  9. Validity of Administrative Data in Identifying Cancer-related Events in Adolescents and Young Adults: A Population-based Study Using the IMPACT Cohort.

    PubMed

    Gupta, Sumit; Nathan, Paul C; Baxter, Nancy N; Lau, Cindy; Daly, Corinne; Pole, Jason D

    2018-06-01

    Despite the importance of estimating population level cancer outcomes, most registries do not collect critical events such as relapse. Attempts to use health administrative data to identify these events have focused on older adults and have been mostly unsuccessful. We developed and tested administrative data-based algorithms in a population-based cohort of adolescents and young adults with cancer. We identified all Ontario adolescents and young adults 15-21 years old diagnosed with leukemia, lymphoma, sarcoma, or testicular cancer between 1992-2012. Chart abstraction determined the end of initial treatment (EOIT) date and subsequent cancer-related events (progression, relapse, second cancer). Linkage to population-based administrative databases identified fee and procedure codes indicating cancer treatment or palliative care. Algorithms determining EOIT based on a time interval free of treatment-associated codes, and new cancer-related events based on billing codes, were compared with chart-abstracted data. The cohort comprised 1404 patients. Time periods free of treatment-associated codes did not validly identify EOIT dates; using subsequent codes to identify new cancer events was thus associated with low sensitivity (56.2%). However, using administrative data codes that occurred after the EOIT date based on chart abstraction, the first cancer-related event was identified with excellent validity (sensitivity, 87.0%; specificity, 93.3%; positive predictive value, 81.5%; negative predictive value, 95.5%). Although administrative data alone did not validly identify cancer-related events, administrative data in combination with chart collected EOIT dates was associated with excellent validity. The collection of EOIT dates by cancer registries would significantly expand the potential of administrative data linkage to assess cancer outcomes.
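
    The gap-based logic the study tests can be sketched in a few lines: scan dated treatment-associated codes for the first interval longer than a threshold, call its start the end of initial treatment (EOIT), and flag later codes as candidate new events. The dates and the 180-day window below are invented.

    ```python
    # Treatment-free-interval detection over dated billing codes: find EOIT,
    # then flag codes occurring well after it as candidate cancer events.
    # Illustrative sketch; dates and window are assumptions.
    from datetime import date, timedelta

    treatment_dates = sorted([
        date(2000, 1, 10), date(2000, 2, 3), date(2000, 3, 1),
        date(2000, 4, 2), date(2001, 6, 20),   # late code: candidate relapse marker
    ])
    GAP = timedelta(days=180)

    eoit = treatment_dates[-1]
    for d1, d2 in zip(treatment_dates, treatment_dates[1:]):
        if d2 - d1 > GAP:          # first treatment-free interval longer than GAP
            eoit = d1
            break

    events = [d for d in treatment_dates if d > eoit + GAP]
    print("EOIT:", eoit, "| candidate new-event codes:", events)
    ```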

  10. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  11. Validation of Hydrodynamic Load Models Using CFD for the OC4-DeepCwind Semisubmersible: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.

    Computational fluid dynamics (CFD) simulations were carried out on the OC4-DeepCwind semi-submersible to obtain a better understanding of how to set hydrodynamic coefficients for the structure when using an engineering tool such as FAST to model the system. The focus here was on the drag behavior and the effects of the free surface, free ends and multi-member arrangement of the semi-submersible structure. These effects are investigated through code-to-code comparisons and flow visualizations. The implications for mean load predictions from engineering tools are addressed. The work presented here suggests that selection of drag coefficients should take into consideration a variety of geometric factors. Furthermore, CFD simulations demonstrate large time-varying loads due to vortex shedding, which FAST's hydrodynamic module, HydroDyn, does not model. The implications of these oscillatory loads on fatigue life need to be addressed.
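
    For orientation, the drag coefficient under discussion enters engineering tools through a quadratic, Morison-type member load of the following form; the dimensions and values are illustrative, not OC4-DeepCwind properties.

    ```python
    # Quadratic (Morison-type) drag load on a submerged member, the term
    # where the choice of Cd enters. Values are illustrative assumptions.
    rho = 1025.0      # sea water density [kg/m^3]
    Cd = 0.65         # drag coefficient under study (assumed)
    D, L = 6.5, 20.0  # column diameter and wetted length [m] (assumed)
    u = 1.2           # relative flow velocity normal to the member [m/s]

    F_drag = 0.5 * rho * Cd * (D * L) * u * abs(u)   # [N]; sign follows the flow
    print(f"drag load ~ {F_drag/1e3:.1f} kN")
    ```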

  12. Quantum coding with finite resources.

    PubMed

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M

    2016-05-09

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.
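
    The trade-off described here is commonly summarized, for well-behaved channels, by a second-order ("normal") approximation of the schematic form below, where Phi^{-1} is the inverse standard normal CDF. This is offered as orientation only; the paper derives channel-specific bounds with their own constants and correction terms.

    ```latex
    % Schematic second-order approximation: achievable rate R at block
    % length n and tolerated infidelity eps, in terms of the capacity Q
    % and the quantum channel dispersion V.
    R \approx Q + \sqrt{\frac{V}{n}}\,\Phi^{-1}(\varepsilon)
    ```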

  13. Quantum coding with finite resources

    PubMed Central

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M.

    2016-01-01

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995

  14. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method, and solves a scalar transport equation for temperature using the Boussinesq approximation for buoyancy. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
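
    The Lagrangian dispersion component mentioned above boils down, per particle and per step, to mean-wind advection plus a diffusive random walk. A minimal sketch with invented values (a real model interpolates the wind from the flow solver and uses spatially varying turbulence statistics):

    ```python
    # One Lagrangian particle-dispersion loop: advect with the local mean
    # wind and add a random-walk term for turbulent diffusion. Illustrative
    # values; not Aeolus's actual dispersion model.
    import numpy as np

    rng = np.random.default_rng(0)
    n, dt, K = 10000, 0.5, 2.0        # particles, time step [s], eddy diffusivity [m^2/s]
    pos = np.zeros((n, 3))            # all particles released at the origin
    wind = np.array([3.0, 0.5, 0.0])  # local mean wind [m/s] (assumed uniform here)

    for _ in range(240):              # two minutes of transport
        pos += wind * dt + rng.normal(0.0, np.sqrt(2 * K * dt), size=(n, 3))

    print("plume centroid [m]:", pos.mean(axis=0).round(1))
    print("plume spread (std) [m]:", pos.std(axis=0).round(1))
    ```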

  15. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  16. Validating diagnostic information on the Minimum Data Set in Ontario Hospital-based long-term care.

    PubMed

    Wodchis, Walter P; Naglie, Gary; Teare, Gary F

    2008-08-01

    Over 20 countries currently use the Minimum Data Set Resident Assessment Instrument (MDS) in long-term care settings for care planning, policy, and research purposes. A full assessment of the quality of the diagnostic information recorded on the MDS is lacking. The primary goal of this study was to examine the quality of diagnostic coding on the MDS. Subjects for this study were admitted to Ontario Complex Continuing Care Hospitals (CCC) directly from acute hospitals between April 1, 1997 and March 31, 2005 (n = 80,664). Encrypted unique identifiers, common across acute and CCC administrative databases, were used to link administrative records for patients in the sample. After linkage, each resident had 2 sources of diagnostic information: the acute discharge abstract database and the MDS. Using the discharge abstract database as the reference standard, we calculated the sensitivity for each of 43 MDS diagnoses. Compared with primary diagnoses coded in acute care abstracts, 12 of 43 MDS diagnoses attained a sensitivity of at least 0.80, including 7 of the 10 diagnoses with the highest prevalence as an acute care primary diagnosis before CCC admission. Although the sensitivity was high for many of the most prevalent conditions, important diagnostic information is missed, increasing the potential for suboptimal clinical care. Emphasis needs to be put on improving information flow across care settings during patient transitions. Researchers should exercise caution when using MDS diagnoses to identify patient populations, particularly those shown to have low sensitivity in this study.

  17. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
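
    The physics of the first benchmark, spinodal decomposition under Cahn-Hilliard dynamics, can be sketched with an explicit finite-difference step. This generic toy uses a double-well bulk free energy c^2(1-c)^2 and is not the benchmark's specified domain, parameters, or numerics.

    ```python
    # Minimal explicit Cahn-Hilliard step for spinodal decomposition on a
    # periodic 2D grid. Generic illustration, not the CHiMaD/NIST benchmark.
    import numpy as np

    rng = np.random.default_rng(0)
    N, dx, dt = 128, 1.0, 0.01
    kappa, M = 1.0, 1.0                            # gradient energy coeff., mobility
    c = 0.5 + 0.01 * rng.standard_normal((N, N))   # near-critical composition

    def lap(f):   # periodic 5-point Laplacian
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

    for _ in range(2000):
        mu = 2.0 * c * (1 - c) * (1 - 2 * c) - kappa * lap(c)  # chemical potential
        c += dt * M * lap(mu)                                  # dc/dt = M lap(mu)

    print(f"composition range after coarsening: {c.min():.3f} to {c.max():.3f}")
    ```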

  18. Initial Kernel Timing Using a Simple PIM Performance Model

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David

    2005-01-01

    This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: linked list traversal, sum of leaf nodes on a tree, bitonic sort, vector sum, and Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point-of-view of both expressiveness and performance.

  19. Automated constraint checking of spacecraft command sequences

    NASA Astrophysics Data System (ADS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Spitale, Joseph M.; Le, Dang

    1995-01-01

    Robotic spacecraft are controlled by onboard sets of commands called "sequences." Determining that sequences will have the desired effect on the spacecraft can be expensive in terms of both labor and computer coding time, with different particular costs for different types of spacecraft. Specification languages and appropriate user interface to the languages can be used to make the most effective use of engineering validation time. This paper describes one specification and verification environment ("SAVE") designed for validating that command sequences have not violated any flight rules. This SAVE system was subsequently adapted for flight use on the TOPEX/Poseidon spacecraft. The relationship of this work to rule-based artificial intelligence and to other specification techniques is discussed, as well as the issues that arise in the transfer of technology from a research prototype to a full flight system.
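
    The flavor of such a constraint checker: flight rules become predicates over the command sequence, and every violation is reported. The commands and rules below are invented for illustration, not actual TOPEX/Poseidon flight rules.

    ```python
    # Rule-based validation of a command sequence: each flight rule is a
    # generator yielding violation messages. Hypothetical commands and rules.
    seq = [
        {"t": 0,  "cmd": "HEATER_ON"},
        {"t": 10, "cmd": "CAMERA_ON"},
        {"t": 12, "cmd": "THRUSTER_FIRE"},
        {"t": 40, "cmd": "CAMERA_OFF"},
    ]

    def rule_no_thrust_while_camera(seq):
        on = False
        for c in seq:
            if c["cmd"] == "CAMERA_ON":
                on = True
            elif c["cmd"] == "CAMERA_OFF":
                on = False
            elif c["cmd"] == "THRUSTER_FIRE" and on:
                yield f"t={c['t']}: THRUSTER_FIRE while camera is on"

    def rule_min_spacing(seq, gap=5):
        for a, b in zip(seq, seq[1:]):
            if b["t"] - a["t"] < gap:
                yield f"t={b['t']}: {b['cmd']} within {gap}s of {a['cmd']}"

    for rule in (rule_no_thrust_while_camera, rule_min_spacing):
        for violation in rule(seq):
            print("VIOLATION:", violation)
    ```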

  20. Increased registration of hypertension and cancer diagnoses after the introduction of a new reimbursement system

    PubMed Central

    Hjerpe, Per; Boström, Kristina Bengtsson; Lindblad, Ulf; Merlo, Juan

    2012-01-01

    Objective: To investigate the impact on ICD coding behaviour of a new case-mix reimbursement system based on coded patient diagnoses. The main hypothesis was that after the introduction of the new system the coding of chronic diseases like hypertension and cancer would increase and the variance in propensity for coding would decrease on both physician and health care centre (HCC) levels. Design: Cross-sectional multilevel logistic regression analyses were performed in periods covering the time before and after the introduction of the new reimbursement system. Setting: Skaraborg primary care, Sweden. Subjects: All patients (n = 76 546 to 79 826) 50 years of age and older visiting 468 to 627 physicians at the 22 public HCCs in five consecutive time periods of one year each. Main outcome measures: Registered codes for hypertension and cancer diseases in the Skaraborg primary care database (SPCD). Results: After the introduction of the new reimbursement system the adjusted prevalence of hypertension and cancer in the SPCD increased from 17.4% to 32.2% and from 0.79% to 2.32%, respectively, probably partly due to increased diagnosis coding of indirect patient contacts. The total variance in the propensity for coding declined simultaneously at the physician level for both diagnosis groups. Conclusions: Changes in the healthcare reimbursement system may directly influence the contents of a research database that retrieves data from clinical practice. This should be taken into account when using such a database for research purposes, and the data should be validated for each diagnosis. PMID:23130878

  1. Applying Classification Trees to Hospital Administrative Data to Identify Patients with Lower Gastrointestinal Bleeding

    PubMed Central

    Siddique, Juned; Ruhnke, Gregory W.; Flores, Andrea; Prochaska, Micah T.; Paesch, Elizabeth; Meltzer, David O.; Whelan, Chad T.

    2015-01-01

    Background: Lower gastrointestinal bleeding (LGIB) is a common cause of acute hospitalization. Currently, there is no accepted standard for identifying patients with LGIB in hospital administrative data. The objective of this study was to develop and validate a set of classification algorithms that use hospital administrative data to identify LGIB. Methods: Our sample consists of patients admitted between July 1, 2001 and June 30, 2003 (derivation cohort) and July 1, 2003 and June 30, 2005 (validation cohort) to the general medicine inpatient service of the University of Chicago Hospital, a large urban academic medical center. Confirmed cases of LGIB in both cohorts were determined by reviewing the charts of those patients who had at least 1 of 36 principal or secondary International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes associated with LGIB. Classification trees were used on the data of the derivation cohort to develop a set of decision rules for identifying patients with LGIB. These rules were then applied to the validation cohort to assess their performance. Results: Three classification algorithms were identified and validated: a high specificity rule with 80.1% sensitivity and 95.8% specificity, a rule that balances sensitivity and specificity (87.8% sensitivity, 90.9% specificity), and a high sensitivity rule with 100% sensitivity and 91.0% specificity. Conclusion: These classification algorithms can be used in future studies to evaluate resource utilization and assess outcomes associated with LGIB without the use of chart review. PMID:26406318
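
    A minimal version of the approach: fit a shallow classification tree on administrative indicators, then shift its probability threshold to trade sensitivity against specificity, mirroring the paper's three rule variants. The data below are synthetic stand-ins for ICD-9-coded admissions.

    ```python
    # Classification-tree rule derivation with a tunable sensitivity/
    # specificity trade-off. Synthetic data; not the study's actual codes.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.integers(0, 2, size=(n, 5)).astype(float)   # e.g. code-group indicators
    y = ((X[:, 0] + X[:, 2] >= 2) | (rng.random(n) < 0.05)).astype(int)  # "true LGIB"

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    proba = tree.predict_proba(X)[:, 1]

    for thr in (0.2, 0.5, 0.8):      # high-sensitivity -> high-specificity rules
        pred = (proba >= thr).astype(int)
        sens = (pred & y).sum() / y.sum()
        spec = ((1 - pred) & (1 - y)).sum() / (1 - y).sum()
        print(f"threshold {thr}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```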

  2. Validity of registration of ICD codes and prescriptions in a research database in Swedish primary care: a cross-sectional study in Skaraborg primary care database

    PubMed Central

    2010-01-01

    Background: In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. Methods: The SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as the gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as the median odds ratio (MOR). Results: For diabetes mellitus and hypertension, ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered more seldom (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42% to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3). Conclusions: Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians, where the greatest amount of variation was found. PMID:20416069
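
    The MOR reported here converts a cluster-level variance on the log-odds scale into an odds-ratio scale via MOR = exp(sqrt(2*sigma^2) * z_0.75). A small sketch; the back-solved variances are shown only as an illustrative consistency check, not as the study's fitted values.

    ```python
    # Median odds ratio (MOR) from a cluster-level variance sigma^2 on the
    # log-odds scale, and the inverse mapping. z_0.75 is the 75th percentile
    # of the standard normal distribution (~0.6745).
    from math import exp, log, sqrt
    from statistics import NormalDist

    z75 = NormalDist().inv_cdf(0.75)

    def mor(sigma2):
        return exp(sqrt(2.0 * sigma2) * z75)

    def sigma2_from_mor(m):
        return (log(m) / z75) ** 2 / 2.0

    print("variance implied by MOR = 4.2:", round(sigma2_from_mor(4.2), 2))
    print("variance implied by MOR = 2.3:", round(sigma2_from_mor(2.3), 2))
    print("round trip:", round(mor(sigma2_from_mor(4.2)), 1))
    ```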

  3. Validity of registration of ICD codes and prescriptions in a research database in Swedish primary care: a cross-sectional study in Skaraborg primary care database.

    PubMed

    Hjerpe, Per; Merlo, Juan; Ohlsson, Henrik; Bengtsson Boström, Kristina; Lindblad, Ulf

    2010-04-23

    In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. The SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as the gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as the median odds ratio (MOR). For diabetes mellitus and hypertension, ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered more seldom (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42% to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3). Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians, where the greatest amount of variation was found.

  4. Validity of the coding for herpes simplex encephalitis in the Danish National Patient Registry

    PubMed Central

    Jørgensen, Laura Krogh; Dalgaard, Lars Skov; Østergaard, Lars Jørgen; Andersen, Nanna Skaarup; Nørgaard, Mette; Mogensen, Trine Hyrup

    2016-01-01

    Background: Large health care databases are a valuable source of infectious disease epidemiology if diagnoses are valid. The aim of this study was to investigate the accuracy of the recorded diagnosis coding of herpes simplex encephalitis (HSE) in the Danish National Patient Registry (DNPR). Methods: The DNPR was used to identify all hospitalized patients, aged ≥15 years, with a first-time diagnosis of HSE according to the International Classification of Diseases, tenth revision (ICD-10), from 2004 to 2014. To validate the coding of HSE, we collected data from the Danish Microbiology Database, from departments of clinical microbiology, and from patient medical records. Cases were classified as confirmed, probable, or no evidence of HSE. We estimated the positive predictive value (PPV) of the HSE diagnosis coding stratified by diagnosis type, study period, and department type. Furthermore, we estimated the proportion of HSE cases coded with nonspecific ICD-10 codes of viral encephalitis and also the sensitivity of the HSE diagnosis coding. Results: We were able to validate 398 (94.3%) of the 422 HSE diagnoses identified via the DNPR. Of these, 202 (50.8%) were classified as confirmed cases and 29 (7.3%) as probable cases, providing an overall PPV of 58.0% (95% confidence interval [CI]: 53.0–62.9). For “Encephalitis due to herpes simplex virus” (ICD-10 code B00.4), the PPV was 56.6% (95% CI: 51.1–62.0). Similarly, the PPV for “Meningoencephalitis due to herpes simplex virus” (ICD-10 code B00.4A) was 56.8% (95% CI: 39.5–72.9). “Herpes viral encephalitis” (ICD-10 code G05.1E) had a PPV of 75.9% (95% CI: 56.5–89.7), thereby representing the highest PPV. The estimated sensitivity was 95.5%. Conclusion: The PPVs of the ICD-10 diagnosis coding for adult HSE in the DNPR were relatively low. Hence, the DNPR should be used with caution when studying patients with encephalitis caused by herpes simplex virus. PMID:27330328

  5. Validation of Diagnostic Groups Based on Health Care Utilization Data Should Adjust for Sampling Strategy.

    PubMed

    Cadieux, Geneviève; Tamblyn, Robyn; Buckeridge, David L; Dendukuri, Nandini

    2017-08-01

    Valid measurement of outcomes such as disease prevalence using health care utilization data is fundamental to the implementation of a "learning health system." Definitions of such outcomes can be complex, based on multiple diagnostic codes. The literature on validating such data demonstrates a lack of awareness of the need for a stratified sampling design and corresponding statistical methods. We propose a method for validating the measurement of diagnostic groups that have: (1) different prevalences of diagnostic codes within the group; and (2) low prevalence. We describe an estimation method whereby: (1) low-prevalence diagnostic codes are oversampled, and the positive predictive value (PPV) of the diagnostic group is estimated as a weighted average of the PPV of each diagnostic code; and (2) claims that fall within a low-prevalence diagnostic group are oversampled relative to claims that are not, and bias-adjusted estimators of sensitivity and specificity are generated. We illustrate our proposed method using an example from population health surveillance in which diagnostic groups are applied to physician claims to identify cases of acute respiratory illness. Failure to account for the prevalence of each diagnostic code within a diagnostic group leads to the underestimation of the PPV, because low-prevalence diagnostic codes are more likely to be false positives. Failure to adjust for oversampling of claims that fall within the low-prevalence diagnostic group relative to those that do not leads to the overestimation of sensitivity and underestimation of specificity.
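
    The paper's first proposal in miniature: estimate a PPV per diagnostic code from an oversampled validation set, then combine the code-level PPVs with prevalence weights. The codes, shares, and PPVs below are invented; the contrast with an unweighted average shows the bias the authors describe.

    ```python
    # Prevalence-weighted PPV of a diagnostic group built from several codes.
    # All numbers are illustrative.
    codes = {
        # code: (share of the group's claims, validated PPV of that code)
        "J06.9": (0.70, 0.90),
        "J20.9": (0.25, 0.75),
        "J21.8": (0.05, 0.40),   # rare, low-PPV code
    }

    group_ppv = sum(w * ppv for w, ppv in codes.values())
    naive_ppv = sum(ppv for _, ppv in codes.values()) / len(codes)

    print(f"prevalence-weighted group PPV: {group_ppv:.3f}")
    print(f"unweighted (biased) average:   {naive_ppv:.3f}")
    ```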

  6. Validation of administrative data used for the diagnosis of upper gastrointestinal events following nonsteroidal anti-inflammatory drug prescription.

    PubMed

    Abraham, N S; Cohen, D C; Rivers, B; Richardson, P

    2006-07-15

    To validate Veterans Affairs (VA) administrative data for the diagnosis of nonsteroidal anti-inflammatory drug (NSAID)-related upper gastrointestinal events (UGIE) and to develop a diagnostic algorithm. A retrospective study of veterans prescribed an NSAID as identified from the national pharmacy database merged with in-patient and out-patient data, followed by primary chart abstraction. Contingency tables were constructed to allow comparison with a random sample of patients prescribed an NSAID, but without UGIE. Multivariable logistic regression analysis was used to derive a predictive algorithm. Once derived, the algorithm was validated in a separate cohort of veterans. Of 906 patients, 606 had a diagnostic code for UGIE; 300 were a random subsample of 11 744 patients (control). Only 161 had a confirmed UGIE. The positive predictive value (PPV) of diagnostic codes was poor, but improved from 27% to 51% with the addition of endoscopic procedural codes. The strongest predictors of UGIE were an in-patient ICD-9 code for gastric ulcer, duodenal ulcer and haemorrhage combined with upper endoscopy. This algorithm had a PPV of 73% when limited to patients ≥65 years (c-statistic 0.79). Validation of the algorithm revealed a PPV of 80% among patients with an overlapping NSAID prescription. NSAID-related UGIE can be assessed using VA administrative data. The optimal algorithm includes an in-patient ICD-9 code for gastric or duodenal ulcer and gastrointestinal bleeding combined with a procedural code for upper endoscopy.

  7. Overview of hypersonic CFD code calibration studies

    NASA Technical Reports Server (NTRS)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  8. Assessing preschoolers' interactive behaviour: A validation study of the "Coding System for Mother-Child Interaction".

    PubMed

    Baiao, R; Baptista, J; Carneiro, A; Pinto, R; Toscano, C; Fearon, P; Soares, I; Mesquita, A R

    2018-07-01

    The preschool years are a period of great developmental achievements, which impact critically on a child's interactive skills. Having valid and reliable measures to assess interactive behaviour at this stage is therefore crucial. The aim of this study was to describe the adaptation and validation of the child coding of the Coding System for Mother-Child Interactions and discuss its applications and implications in future research and practice. Two hundred twenty Portuguese preschoolers and their mothers were videotaped during a structured task. Child and mother interactive behaviours were coded based on the task. Maternal reports on the child's temperament and emotional and behaviour problems were also collected, along with family psychosocial information. Interrater agreement was confirmed. The use of child Cooperation, Enthusiasm, and Negativity as subscales was supported by their correlations across tasks. Moreover, these subscales were correlated with each other, which supports the use of a global child interactive behaviour score. Convergent validity with a measure of emotional and behavioural problems (Child Behaviour Checklist 1 ½-5) was established, as well as divergent validity with a measure of temperament (Children's Behaviour Questionnaire-Short Form). Regarding associations with family variables, child interactive behaviour was only associated with maternal behaviour. Findings suggest that this coding system is a valid and reliable measure for assessing child interactive behaviour in preschool-age children. It therefore represents an important alternative for research and practice in this area, with reduced costs and more flexible training requirements. Attention should be given in future research to expanding this work to clinical populations and different age groups. © 2018 John Wiley & Sons Ltd.

  9. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validations of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.

  10. CASL Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousseau, Vincent Andrew; Dinh, Nam

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This will be a living document that tracks CASL's progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy's (DOE's) CASL program in support of milestone CASL.P13.02.

  11. Perioperative cardiopulmonary arrest competencies.

    PubMed

    Murdock, Darlene B

    2013-08-01

    Although basic life support skills are not often needed in the surgical setting, it is crucial that surgical team members understand their roles and are ready to intervene swiftly and effectively if necessary. Ongoing education and training are key elements to equip surgical team members with the skills and knowledge they need to handle untimely and unexpected life-threatening scenarios in the perioperative setting. Regular emergency cardiopulmonary arrest skills education, including the use of checklists, and mock codes are ways to validate that team members understand their responsibilities and are competent to help if an arrest occurs in the OR. After a mock drill, a debriefing session can help team members discuss and critique their performances and improve their knowledge and mastery of skills. Copyright © 2013 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  12. Electronic medical record: research tool for pancreatic cancer?

    PubMed

    Arous, Edward J; McDade, Theodore P; Smith, Jillian K; Ng, Sing Chau; Sullivan, Mary E; Zottola, Ralph J; Ranauro, Paul J; Shah, Shimul A; Whalen, Giles F; Tseng, Jennifer F

    2014-04-01

    A novel data warehouse based on automated retrieval from an institutional health care information system (HIS) was made available to be compared with a traditional prospectively maintained surgical database. A newly established institutional data warehouse at a single-institution academic medical center autopopulated by HIS was queried for International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes for pancreatic neoplasm. Patients with ICD-9-CM diagnosis codes for pancreatic neoplasm were captured. A parallel query was performed using a prospective database populated by manual entry. Duplicated patients and those unique to either data set were identified. All patients were manually reviewed to determine the accuracy of diagnosis. A total of 1107 patients were identified from the HIS-linked data set with pancreatic neoplasm from 1999-2009. Of these, 254 (22.9%) patients were also captured by the surgical database, whereas 853 (77.1%) patients were only in the HIS-linked data set. Manual review of the HIS-only group demonstrated that 45.0% of patients were without identifiable pancreatic pathology, suggesting erroneous capture, whereas 36.3% of patients were consistent with pancreatic neoplasm and 18.7% with other pancreatic pathology. Of the 394 patients identified by the surgical database, 254 (64.5%) patients were captured by HIS, whereas 140 (35.5%) patients were not. Manual review of patients only captured by the surgical database demonstrated 85.9% with pancreatic neoplasm and 14.1% with other pancreatic pathology. Finally, review of the 254 patient overlap demonstrated that 80.3% of patients had pancreatic neoplasm and 19.7% had other pancreatic pathology. These results suggest that administrative data relying only on ICD-9-CM diagnosis codes require cautious interpretation and clinical correlation through previously validated mechanisms. Published by Elsevier Inc.

  13. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays.

    PubMed

    Hahn, Cassidy M; Iwanowicz, Luke R; Cornman, Robert S; Mazik, Patricia M; Blazer, Vicki S

    2016-12-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species; smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest. Published by Elsevier Inc.

  14. DXRaySMCS: a user-friendly interface developed for prediction of diagnostic radiology X-ray spectra produced by Monte Carlo (MCNP-4C) simulation.

    PubMed

    Bahreyni Toossi, M T; Moradi, H; Zare, H

    2008-01-01

    In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code for diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78 spectra. Generally, there is a good agreement between the two sets of spectra. No statistically significant differences have been observed between the IPEM78-reported spectra and the simulated spectra generated in this study.
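
    The interface's three steps map naturally onto a small wrapper; a hedged sketch follows. The executable name, the "i=/o=" invocation, and the template placeholders are assumptions for illustration, not details taken from the published tool.

      # (i) modify the MCNP input file, (ii) launch the run, (iii) reduce the
      # output to relative photon number per energy bin.
      import subprocess
      from pathlib import Path

      def run_case(template, kvp, anode_angle):
          Path("case.inp").write_text(template.format(kvp=kvp, angle=anode_angle))
          # "i=" / "o=" is the conventional MCNP invocation, assumed here.
          subprocess.run(["mcnp", "i=case.inp", "o=case.out"], check=True)

      def relative_photons(tally_counts):
          """Normalize a photon tally to relative counts per energy bin."""
          total = sum(tally_counts)
          return [c / total for c in tally_counts]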

  15. Method for coding low entropy data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1995-01-01

    A method of lossless data compression for efficient coding of an electronic signal of information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set, (s(sub 0), s(sub 1), s(sub 2), ..., s(sub N-1)) of N symbols with s(sub i) = i. The difference between binary digital data is mapped into symbol set S. Consecutive symbols in symbol set S are then paired into a new symbol set Gamma which defines a non-negative symbol set containing the symbols (gamma(sub m)) obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code which is defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows a direct coding and decoding of the n-bit positive integer digital data differences without the use of codebooks.
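
    The scheme can be made concrete with simple choices for each stage; in the sketch below, signed differences are zigzag-mapped onto S, pairs are folded into Gamma by base-N positional combination, and the comma code is plain unary with '1' as the terminating comma. These particular mappings are illustrative; the patent may specify different ones.

      # Sketch: difference -> symbol set S -> paired extension Gamma -> comma code.
      def to_symbol(d):
          """Zigzag-map a signed difference onto S = {0, 1, 2, ...}."""
          return 2 * d if d >= 0 else -2 * d - 1

      def encode(differences, n_symbols):
          syms = [to_symbol(d) for d in differences]
          if len(syms) % 2:
              syms.append(0)  # pad to whole pairs (decodes as a trailing 0)
          out = []
          for a, b in zip(syms[::2], syms[1::2]):
              gamma = a * n_symbols + b      # pair -> extended symbol in Gamma
              out.append("0" * gamma + "1")  # comma code: every word ends in '1'
          return "".join(out)

      def decode(bits, n_symbols):
          syms = []
          for word in bits.split("1")[:-1]:  # the comma '1' delimits codewords
              gamma = len(word)
              syms += [gamma // n_symbols, gamma % n_symbols]
          return [s // 2 if s % 2 == 0 else -(s + 1) // 2 for s in syms]

      data = [0, 0, 1, -1, 0, 0]             # low-entropy: mostly zeros
      assert decode(encode(data, 8), 8) == data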

  16. Task representation in individual and joint settings

    PubMed Central

    Prinz, Wolfgang

    2015-01-01

    This paper outlines a framework for task representation and discusses applications to interference tasks in individual and joint settings. The framework is derived from the Theory of Event Coding (TEC). This theory regards task sets as transient assemblies of event codes in which stimulus and response codes interact and shape each other in particular ways. On the one hand, stimulus and response codes compete with each other within their respective subsets (horizontal interactions). On the other hand, stimulus and response codes cooperate with each other (vertical interactions). Code interactions instantiating competition and cooperation apply to two time scales: on-line performance (i.e., doing the task) and off-line implementation (i.e., setting the task). Interference arises when stimulus and response codes overlap in features that are irrelevant for stimulus identification, but relevant for response selection. To resolve this dilemma, the feature profiles of event codes may become restructured in various ways. The framework is applied to three kinds of interference paradigms. Special emphasis is given to joint settings where tasks are shared between two participants. Major conclusions derived from these applications include: (1) Response competition is the chief driver of interference. Likewise, different modes of response competition give rise to different patterns of interference; (2) The type of features in which stimulus and response codes overlap is also a crucial factor. Different types of such features likewise give rise to different patterns of interference; and (3) Task sets for joint settings conflate intraindividual conflicts between responses (what), with interindividual conflicts between responding agents (whom). Features of response codes may, therefore, not only address responses, but also responding agents (both physically and socially). PMID:26029085

  17. FY2012 summary of tasks completed on PROTEUS-thermal work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Smith, M.A.

    2012-06-06

    PROTEUS is a suite of the neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element to obtain accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen from VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal Tasks: (1) Unification of different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.

  18. Validation Results for LEWICE 2.0. [Supplement

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Rutkowski, Adam

    1999-01-01

    Two CD-ROMs contain experimental ice shapes and code predictions used for validation of LEWICE 2.0 (see NASA/CR-1999-208690, CASI ID 19990021235). The data include ice shapes from both experiment and LEWICE, all of the input and output files for the LEWICE cases, JPG files of all plots generated, an electronic copy of the text of the validation report, and a Microsoft Excel(R) spreadsheet containing all of the quantitative measurements taken. The LEWICE source code and executable are not contained on the discs.

  19. Verification and Validation of the BISON Fuel Performance Code for PCMI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Kyle Allan Lawrence; Novascone, Stephen Rhead; Gardner, Russell James

    2016-06-01

    BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described. Validation for application to light water reactor (LWR) PCMI problems is assessed by comparing predicted and measured rod diameter following base irradiation and power ramps. Results indicate a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. Initial rod diameter comparisons have led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.

  20. A Severe Sepsis Mortality Prediction Model and Score for Use with Administrative Data

    PubMed Central

    Ford, Dee W.; Goodwin, Andrew J.; Simpson, Annie N.; Johnson, Emily; Nadig, Nandita; Simpson, Kit N.

    2016-01-01

    Objective: Administrative data is used for research, quality improvement, and health policy in severe sepsis. However, there is not a sepsis-specific tool applicable to administrative data with which to adjust for illness severity. Our objective was to develop, internally validate, and externally validate a severe sepsis mortality prediction model and associated mortality prediction score. Design: Retrospective cohort study using 2012 administrative data from five US states. Three cohorts of patients with severe sepsis were created: 1) ICD-9-CM codes for severe sepsis/septic shock, 2) 'Martin' approach, and 3) 'Angus' approach. The model was developed and internally validated in the ICD-9-CM cohort and externally validated in the other cohorts. Integer point values for each predictor variable were generated to create a sepsis severity score. Setting: Acute care, non-federal hospitals in NY, MD, FL, MI, and WA. Subjects: Patients in one of three severe sepsis cohorts: 1) explicitly coded (n=108,448), 2) Martin cohort (n=139,094), and 3) Angus cohort (n=523,637). Interventions: None. Measurements and Main Results: Maximum likelihood estimation logistic regression was used to develop a predictive model for in-hospital mortality. Model calibration and discrimination were assessed via Hosmer-Lemeshow goodness-of-fit (GOF) and C-statistics, respectively. The primary cohort was subset into risk deciles and observed versus predicted mortality plotted. GOF demonstrated p>0.05 for each cohort, demonstrating sound calibration. The C-statistic ranged from a low of 0.709 (sepsis severity score) to a high of 0.838 (Angus cohort), suggesting good to excellent model discrimination. Comparison of observed versus expected mortality was robust, although accuracy decreased in the highest risk decile. Conclusions: Our sepsis severity model and score is a tool that provides reliable risk adjustment for administrative data. PMID:26496452
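
    The step from fitted model to integer score can be sketched as follows, using the common convention of scaling each coefficient by the smallest one and rounding; the predictors, coefficients, and intercept are hypothetical, not the published model.

      import math

      # Hypothetical logistic-regression coefficients for binary predictors.
      betas = {"mech_ventilation": 0.9, "age_ge_80": 0.6, "renal_failure": 0.45}
      unit = min(betas.values())                       # smallest effect = 1 point
      points = {k: round(b / unit) for k, b in betas.items()}

      def severity_score(patient):
          return sum(points[k] for k, present in patient.items() if present)

      def predicted_mortality(patient, intercept=-3.0):
          """Probability from the underlying logistic model."""
          logit = intercept + sum(b for k, b in betas.items() if patient.get(k))
          return 1 / (1 + math.exp(-logit))

      p = {"mech_ventilation": True, "age_ge_80": False, "renal_failure": True}
      print(severity_score(p), round(predicted_mortality(p), 2))  # 3 0.16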

  1. Colour cyclic code for Brillouin distributed sensors

    NASA Astrophysics Data System (ADS)

    Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne

    2015-09-01

    For the first time, a colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code presents an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is realized and validates the theoretical coding gain.

  2. Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice

    NASA Technical Reports Server (NTRS)

    Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.

    2001-01-01

    An experimental program was generated by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady state hot gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36 inch chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two-week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.

  3. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated by a deterministic method using the ETNA code for both p-type HPGe detectors, and showed good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The results of the radioactivity measurement with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. An algorithm to identify rheumatoid arthritis in primary care: a Clinical Practice Research Datalink study

    PubMed Central

    Muller, Sara; Hider, Samantha L; Raza, Karim; Stack, Rebecca J; Hayward, Richard A; Mallen, Christian D

    2015-01-01

    Objective: Rheumatoid arthritis (RA) is a multisystem, inflammatory disorder associated with increased levels of morbidity and mortality. While much research into the condition is conducted in the secondary care setting, routinely collected primary care databases provide an important source of research data. This study aimed to update an algorithm to define RA that was previously developed and validated in the General Practice Research Database (GPRD). Methods: The original algorithm consisted of two criteria. Individuals meeting at least one were considered to have RA. Criterion 1: ≥1 RA Read code and a disease modifying antirheumatic drug (DMARD) without an alternative indication. Criterion 2: ≥2 RA Read codes, with at least one 'strong' code and no alternative diagnoses. Lists of codes for consultations and prescriptions were obtained from the authors of the original algorithm where these were available, or compiled based on the original description and clinical knowledge. 4161 people with a first Read code for RA between 1 January 2010 and 31 December 2012 were selected from the Clinical Practice Research Datalink (CPRD, successor to the GPRD), and the criteria applied. Results: Code lists were updated for the introduction of new Read codes and biological DMARDs. 3577/4161 (86%) of people met the updated algorithm for RA, compared to 61% in the original development study. 62.8% of people fulfilled both Criterion 1 and Criterion 2. Conclusions: Those wishing to define RA in the CPRD should consider using this updated algorithm, rather than a single RA code, if they wish to identify only those who are most likely to have RA. PMID:26700281
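
    The two criteria translate directly into a rule over a patient's coded history, as sketched below; the Read-code and DMARD lists are illustrative placeholders for the published lists.

      # Sketch of the two-criterion RA definition: a person is counted as RA
      # if either criterion holds.
      RA_CODES = {"N040.", "N042."}      # placeholder RA Read codes
      STRONG_RA_CODES = {"N040."}        # placeholder 'strong' subset
      DMARDS = {"methotrexate", "sulfasalazine"}

      def meets_ra_definition(read_codes, prescriptions,
                              dmard_other_indication, alternative_diagnosis):
          ra = [c for c in read_codes if c in RA_CODES]
          # Criterion 1: >=1 RA code plus a DMARD without an alternative indication
          crit1 = (bool(ra)
                   and any(p in DMARDS for p in prescriptions)
                   and not dmard_other_indication)
          # Criterion 2: >=2 RA codes, at least one 'strong', no alternative diagnosis
          crit2 = (len(ra) >= 2
                   and any(c in STRONG_RA_CODES for c in ra)
                   and not alternative_diagnosis)
          return crit1 or crit2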

  5. Initial verification and validation of RAZORBACK - A research reactor transient analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2015-09-01

    This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
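
    The first of the coupled equation sets Razorback solves, the point reactor kinetics equations, is easy to exercise in isolation. The sketch below integrates a one-delayed-group version with constant reactivity and illustrative constants; it omits the thermal feedback and coolant equations entirely.

      from scipy.integrate import solve_ivp

      beta, lam, Lam = 0.0073, 0.08, 1e-5   # delayed fraction, decay const (1/s), generation time (s)
      rho = 0.2 * beta                      # small step reactivity insertion

      def kinetics(t, y):
          n, c = y                          # neutron population, precursor concentration
          dn = (rho - beta) / Lam * n + lam * c
          dc = beta / Lam * n - lam * c
          return [dn, dc]

      y0 = [1.0, beta / (Lam * lam)]        # steady-state initial condition
      sol = solve_ivp(kinetics, (0.0, 5.0), y0, rtol=1e-8)
      print(f"relative power after 5 s: {sol.y[0, -1]:.2f}")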

  6. The Value of Electronically Extracted Data for Auditing Outpatient Antimicrobial Prescribing.

    PubMed

    Livorsi, Daniel J; Linn, Carrie M; Alexander, Bruce; Heintz, Brett H; Tubbs, Traviss A; Perencevich, Eli N

    2018-01-01

    OBJECTIVE The optimal approach to auditing outpatient antimicrobial prescribing has not been established. We assessed how different types of electronic data-including prescriptions, patient-visits, and International Classification of Disease, Tenth Revision (ICD-10) codes-could inform automated antimicrobial audits. DESIGN Outpatient visits during 2016 were retrospectively reviewed, including chart abstraction, if an antimicrobial was prescribed (cohort 1) or if the visit was associated with an infection-related ICD-10 code (cohort 2). Findings from cohorts 1 and 2 were compared. SETTING Primary care clinics and the emergency department (ED) at the Iowa City Veterans Affairs Medical Center. RESULTS In cohort 1, we reviewed 2,353 antimicrobial prescriptions across 52 providers. ICD-10 codes had limited sensitivity and positive predictive value (PPV) for validated cases of cystitis and pneumonia (sensitivity, 65.8%, 56.3%, respectively; PPV, 74.4%, 52.5%, respectively). The volume-adjusted antimicrobial prescribing rate was 13.6 per 100 ED visits and 7.5 per 100 primary care visits. In cohort 2, antimicrobials were not indicated in 474 of 851 visits (55.7%). The antimicrobial overtreatment rate was 48.8% for the ED and 59.7% for primary care. At the level of the individual prescriber, there was a positive correlation between a provider's volume-adjusted antimicrobial prescribing rate and the individualized rates of overtreatment in both the ED (r=0.72; P<.01) and the primary care setting (r=0.82; P=0.03). CONCLUSIONS In this single-center study, ICD-10 codes had limited sensitivity and PPV for 2 infections that typically require antimicrobials. Electronically extracted data on a provider's rate of volume-adjusted antimicrobial prescribing correlated with the frequency at which unnecessary antimicrobials were prescribed, but this may have been driven by outlier prescribers. Infect Control Hosp Epidemiol 2018;39:64-70.
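
    The two per-provider quantities the reported correlation rests on are simple to compute, as sketched below with illustrative counts (prescriptions, visits, unnecessary prescriptions, audited prescriptions) per provider.

      from statistics import correlation  # Python 3.10+

      def per_100_visits(n_prescriptions, n_visits):
          """Volume-adjusted antimicrobial prescribing rate."""
          return 100.0 * n_prescriptions / n_visits

      # Hypothetical per-provider counts, not study data.
      providers = [(40, 250, 18, 40), (22, 300, 6, 22), (55, 310, 30, 55)]
      rates = [per_100_visits(p, v) for p, v, _, _ in providers]
      overtreatment = [u / a for _, _, u, a in providers]
      print(f"r = {correlation(rates, overtreatment):.2f}")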

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and going beyond the usually adopted quasi-static approximation. Our code is publicly available.

  8. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  9. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    NASA Astrophysics Data System (ADS)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners, active in the real-time embedded systems domains. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  10. Video transmission on ATM networks. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on bridging network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
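
    The proposed policing function can be sketched as two buckets checked jointly: a deep bucket leaking at the sustained (mean) cell rate and a shallow bucket leaking at the peak rate, with a cell conforming only when both admit it. The continuous-leak formulation and all parameter values below are illustrative, not the thesis's exact parameters.

      class LeakyBucket:
          def __init__(self, leak_rate, depth):
              self.leak_rate, self.depth = leak_rate, depth
              self.level, self.last_t = 0.0, 0.0

          def drain(self, t):
              """Leak continuously since the last cell arrival."""
              self.level = max(0.0, self.level - (t - self.last_t) * self.leak_rate)
              self.last_t = t

          def would_conform(self):
              return self.level + 1.0 <= self.depth

          def commit(self):
              self.level += 1.0

      sustained = LeakyBucket(leak_rate=100.0, depth=50.0)   # mean-rate bucket
      peak = LeakyBucket(leak_rate=1000.0, depth=2.0)        # peak-rate bucket

      def police(cell_time):
          buckets = (sustained, peak)
          for b in buckets:
              b.drain(cell_time)
          if all(b.would_conform() for b in buckets):
              for b in buckets:
                  b.commit()
              return True    # conforming cell: admit
          return False       # nonconforming: drop or tag, buckets unchanged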

  11. Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.

    PubMed

    Houston, Lauren; Probst, Yasmine; Humphries, Allison

    2015-01-01

    Health data have long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed whereby, if >5% of data variables were incorrect, a second 10% random sample would be extracted from the trial data set. Error was coded: correct, incorrect (valid or invalid), not recorded or not entered. Audit-1 had a total error rate of 33% and audit-2 of 36%. The physiological section was the only audit section to have <5% error. Data not recorded to case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data was deemed correct or incorrect. Our study developed a straightforward method to perform an SDV audit. An audit rule was identified and error coding was implemented. Findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented for future research.
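
    The audit plan and quality assurance rule amount to a small procedure, sketched below under the assumption that a verify() callback compares each variable in a file against its source and returns one of the paper's error categories.

      import random

      def audit(files, verify):
          """100% checks on a 10% random sample; returns the error rate."""
          sample = random.sample(files, max(1, len(files) // 10))
          results = [cat for f in sample for cat in verify(f)]
          return sum(cat != "correct" for cat in results) / len(results)

      def run_audit(files, verify, threshold=0.05):
          rate1 = audit(files, verify)
          # QA rule: >5% incorrect variables triggers a second 10% sample.
          rate2 = audit(files, verify) if rate1 > threshold else None
          return rate1, rate2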

  12. From pole to pole: 33 years of physical oceanography onboard R/V Polarstern

    NASA Astrophysics Data System (ADS)

    Driemel, Amelie; Fahrbach, Eberhard; Rohardt, Gerd; Beszczynska-Möller, Agnieszka; Boetius, Antje; Budéus, Gereon; Cisewski, Boris; Engbrodt, Ralph; Gauger, Steffen; Geibert, Walter; Geprägs, Patrizia; Gerdes, Dieter; Gersonde, Rainer; Gordon, Arnold L.; Grobe, Hannes; Hellmer, Hartmut H.; Isla, Enrique; Jacobs, Stanley S.; Janout, Markus; Jokat, Wilfried; Klages, Michael; Kuhn, Gerhard; Meincke, Jens; Ober, Sven; Østerhus, Svein; Peterson, Ray G.; Rabe, Benjamin; Rudels, Bert; Schauer, Ursula; Schröder, Michael; Schumacher, Stefanie; Sieger, Rainer; Sildam, Jüri; Soltwedel, Thomas; Stangeew, Elena; Stein, Manfred; Strass, Volker H.; Thiede, Jörn; Tippenhauer, Sandra; Veth, Cornelis; von Appen, Wilken-Jon; Weirig, Marie-France; Wisotzki, Andreas; Wolf-Gladrow, Dieter A.; Kanzow, Torsten

    2017-03-01

    Measuring temperature and salinity profiles in the world's oceans is crucial to understanding ocean dynamics and its influence on the heat budget, the water cycle, the marine environment and on our climate. Since 1983 the German research vessel and icebreaker Polarstern has been the platform of numerous CTD (conductivity, temperature, depth instrument) deployments in the Arctic and the Antarctic. We report on a unique data collection spanning 33 years of polar CTD data. In total 131 data sets (1 data set per cruise leg) containing data from 10 063 CTD casts are now freely available at doi:10.1594/PANGAEA.860066. During this long period five CTD types with different characteristics and accuracies have been used. Therefore the instruments and processing procedures (sensor calibration, data validation, etc.) are described in detail. This compilation is special not only with regard to the quantity but also the quality of the data - the latter indicated for each data set using defined quality codes. The complete data collection includes a number of repeated sections for which the quality code can be used to investigate and evaluate long-term changes. Beginning with 2010, the salinity measurements presented here are of the highest quality possible in this field owing to the introduction of the OPTIMARE Precision Salinometer.

  13. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations

    PubMed Central

    Mason, Marc A; Kuczmarski, Marie Fanelli; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2016-01-01

    Objective: Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Design: Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Setting: Healthy Aging in Neighborhoods of Diversity across the Life Span study. Subjects: African-American and White adults with two dietary recalls (n=2177). Results: Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Conclusions: Use of combination codes provided a more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet–health relationships. PMID:25435191
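
    The revised coding step can be sketched as a grouping pass over one recall, with the largest gram weight standing in for the "major food component" rule; the records and that tie-break are illustrative.

      from collections import defaultdict

      def apply_combination_codes(recall):
          """recall: list of (occasion_id, food, grams) tuples."""
          by_occasion = defaultdict(list)
          for occ, food, grams in recall:
              by_occasion[occ].append((food, grams))
          coded = []
          for foods in by_occasion.values():
              if len(foods) == 1:
                  coded.append(foods[0][0])                  # individual item
              else:
                  major = max(foods, key=lambda f: f[1])[0]  # major component
                  coded.append(f"{major} (combination)")
          return coded

      meal = [(1, "pancakes", 120), (1, "syrup", 30), (2, "coffee", 240), (2, "milk", 30)]
      print(apply_combination_codes(meal))  # ['pancakes (combination)', 'coffee (combination)']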

  14. ESTEST: A Framework for the Verification and Validation of Electronic Structure Codes

    NASA Astrophysics Data System (ADS)

    Yuan, Gary; Gygi, Francois

    2011-03-01

    ESTEST is a verification and validation (V&V) framework for electronic structure codes that supports Qbox, Quantum Espresso, ABINIT, and the Exciting Code, with plans to support many more. We discuss various approaches to the electronic structure V&V problem implemented in ESTEST, related to parsing, formats, data management, search, comparison and analyses. Additionally, an early experiment in the distribution of ESTEST V&V servers among the electronic structure community will be presented. Supported by NSF-OCI 0749217 and DOE FC02-06ER25777.

  15. Supersonic Coaxial Jet Experiment for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Carty, A. A.; Doerner, S. E.; Diskin, G. S.; Drummond, J. P.

    1999-01-01

    A supersonic coaxial jet facility has been designed to provide experimental data suitable for the validation of CFD codes used to analyze high-speed propulsion flows. The center jet is of a light gas and the coflow jet is of air, and the mixing layer between them is compressible. Various methods have been employed in characterizing the jet flow field, including schlieren visualization, pitot, total temperature and gas sampling probe surveying, and RELIEF velocimetry. A Navier-Stokes code has been used to calculate the nozzle flow field and the results compared to the experiment.

  16. Modeling and Prediction of the Noise from Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    The new source model was combined with the original sound propagation model developed for rectangular jets to produce a new version of the rectangular jet noise prediction code. This code was validated using a set of rectangular nozzles whose geometries were specified by NASA. Nozzles of aspect ratios two, four and eight were studied at jet exit Mach numbers of 0.5, 0.7 and 0.9, for a total of nine cases. Reynolds-averaged Navier-Stokes solutions for these jets were provided to the contractor for use as input to the code. Quantitative comparisons of the predicted azimuthal and polar directivity of the acoustic spectrum were made with experimental data provided by NASA. The results of these comparisons, along with a documentation of the propagation and source models, were reported in a journal article publication (Ref. 4). The complete set of computer codes and computational modules that make up the prediction scheme, along with a user's guide describing their use and example test cases, was provided to NASA as a deliverable of this task. The use of conformal mapping, along with simplified modeling of the mean flow field, for noise propagation modeling was explored for other nozzle geometries, to support the task milestone of developing methods which are applicable to other geometries and flow conditions of interest to NASA. A model to represent twin round jets using this approach was formulated and implemented. A general approach to solving the equations governing sound propagation in a locally parallel nonaxisymmetric jet was developed and implemented, in aid of the tasks and milestones charged with selecting more exact numerical methods for modeling sound propagation, and developing methods that have application to other nozzle geometries. The method is based on expansion of both the mean-flow-dependent coefficients in the governing equation and the Green's function in series of orthogonal functions. The method was coded and tested on two analytically prescribed mean flows which were meant to represent noise reduction concepts being considered by NASA. Testing (Ref. 5) showed that the method was feasible for the types of mean flows of interest in jet noise applications. Subsequently, this method was further developed to allow use of mean flow profiles obtained from a Reynolds-averaged Navier-Stokes (RANS) solution of the flow. Preliminary testing of the generalized code was among the last tasks completed under this contract. The stringent noise-reduction goals of NASA's Fundamental Aeronautics Program suggest that, in addition to potentially complex exhaust nozzle geometries, next generation aircraft will also involve tighter integration of the engine with the airframe. Therefore, noise generated and propagated by jet flows in the vicinity of solid surfaces is expected to be quite significant, and reduced-order noise prediction tools will be needed that can deal with such geometries. One important source of noise is that generated by the interaction of a turbulent jet with the edge of a solid surface (edge noise). Such noise is generated, for example, by the passing of the engine exhaust over a shielding surface, such as a wing. Work under this task supported an effort to develop a RANS-based prediction code for edge noise based on an extension of the classical Rapid Distortion Theory (RDT) to transversely sheared base flows (Refs. 6 and 7). The RDT-based theoretical analysis was applied to the generic problem of a turbulent jet interacting with the trailing edge of a flat plate.
A code was written to evaluate the formula derived for the spectrum of the noise produced by this interaction and results were compared with data taken at NASA Glenn for a variety of jet/plate configurations and flow conditions (Ref. 8). A longer-term goal of this task was to work toward the development of a high-fidelity model of sound propagation in spatially developing non-axisymmetric jets using direct numerical methods for solving the relevant equations. Working with NASA Glenn Acoustics Branch personnel, numerical methods and boundary conditions appropriate for use in a high-resolution calculation of the full equations governing sound propagation in a steady base flow were identified. Computer codes were then written (by NASA) and tested (by OAI) for an increasingly complex set of flow conditions to validate the methods. The NASA-supplied codes were ported to the High-End Computing resources of the NASA Advanced Supercomputing facility for testing and validation against analytical (where possible) and independent numerical solutions. The cases which were completed during the course of this contract were solutions of the two-dimensional linearized Euler equations with no mean flow, a uniform mean flow and a nonuniform mean flow representative of a parallel flow jet.

  17. Conceptual Underpinnings of the Quality of Life in Neurological Disorders (Neuro-QoL): Comparisons of Core Sets for Stroke, Multiple Sclerosis, Spinal Cord Injury, and Traumatic Brain Injury.

    PubMed

    Wong, Alex W K; Lau, Stephen C L; Fong, Mandy W M; Cella, David; Lai, Jin-Shei; Heinemann, Allen W

    2018-04-03

    To determine the extent to which the content of the Quality of Life in Neurological Disorders (Neuro-QoL) covers the International Classification of Functioning, Disability and Health (ICF) Core Sets for multiple sclerosis (MS), stroke, spinal cord injury (SCI), and traumatic brain injury (TBI) using summary linkage indicators. Content analysis by linking content of the Neuro-QoL to corresponding ICF codes of each Core Set for MS, stroke, SCI, and TBI. Three academic centers. None. None. Four summary linkage indicators proposed by MacDermid et al were estimated to compare the content coverage between Neuro-QoL and the ICF codes of Core Sets for MS, stroke, SCI, and TBI. Neuro-QoL represented 20% to 30% of Core Set codes for the different conditions, in which more codes in Core Sets for MS (29%), stroke (28%), and TBI (28%) were covered than those for SCI in the long-term (20%) and early postacute (19%) contexts. Neuro-QoL represented nearly half of the unique Activity and Participation codes (43%-49%) and less than one third of the unique Body Function codes (12%-32%). It represented fewer Environmental Factors codes (2%-6%) and no Body Structures codes. Absolute linkage indicators found that at least 60% of Neuro-QoL items were linked to Core Set codes (63%-95%), but many items covered the same codes as revealed by unique linkage indicators (7%-13%), suggesting high concept redundancy among items. The Neuro-QoL links more closely to ICF Core Sets for stroke, MS, and TBI than to those for SCI, and primarily covers activity and participation ICF domains. Other instruments are needed to address concepts not measured by the Neuro-QoL when a comprehensive health assessment is needed. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
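
    Two of the summary indicators reduce to set arithmetic over the item-to-ICF links, as in the simplified sketch below (the code values are illustrative, and the published MacDermid indicators include refinements not shown here).

      def coreset_coverage(item_links, core_set):
          """Share of Core Set codes represented by at least one item."""
          linked = set().union(*item_links)
          return len(linked & core_set) / len(core_set)

      def unique_linkage(item_links):
          """Distinct codes over all links; low values flag concept redundancy."""
          all_links = [code for links in item_links for code in links]
          return len(set(all_links)) / len(all_links)

      items = [{"d450"}, {"d450", "b152"}, {"b152"}, {"d550"}]  # ICF codes per item
      core = {"d450", "d550", "b152", "b280", "s110"}
      print(coreset_coverage(items, core), unique_linkage(items))  # 0.6 0.6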

  18. Methodology, status and plans for development and assessment of Cathare code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, together with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future development of the code are presented. They concern optimization of the code's performance through parallel computing (the code will be used for real-time full-scope plant simulators), coupling with many other codes (neutronic codes, severe accident codes), and application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the modeling for the 3-D model.

  19. Accuracy of external cause-of-injury coding in VA polytrauma patient discharge records.

    PubMed

    Carlson, Kathleen F; Nugent, Sean M; Grill, Joseph; Sayer, Nina A

    2010-01-01

    Valid and efficient methods of identifying the etiology of treated injuries are critical for characterizing patient populations and developing prevention and rehabilitation strategies. We examined the accuracy of external cause-of-injury codes (E-codes) in Veterans Health Administration (VHA) administrative data for a population of injured patients. Chart notes and E-codes were extracted for 566 patients treated at any one of four VHA Polytrauma Rehabilitation Center sites between 2001 and 2006. Two expert coders, blinded to VHA E-codes, used chart notes to assign "gold standard" E-codes to injured patients. The accuracy of VHA E-coding was examined based on these gold standard E-codes. Only 382 of 517 (74%) injured patients were assigned E-codes in VHA records. Sensitivity of VHA E-codes varied significantly by site (range: 59%-91%, p < 0.001). Sensitivity was highest for combat-related injuries (81%) and lowest for fall-related injuries (60%). Overall specificity of E-codes was high (92%). E-coding accuracy was markedly higher when we restricted analyses to records that had been assigned VHA E-codes. E-codes may not be valid for ascertaining source-of-injury data for all injuries among VHA rehabilitation inpatients at this time. Enhanced training and policies may ensure more widespread, standardized use and accuracy of E-codes for injured veterans treated in the VHA.

  20. Case-finding for common mental disorders of anxiety and depression in primary care: an external validation of routinely collected data.

    PubMed

    John, Ann; McGregor, Joanne; Fone, David; Dunstan, Frank; Cornish, Rosie; Lyons, Ronan A; Lloyd, Keith R

    2016-03-15

    The robustness of epidemiological research using routinely collected primary care electronic data to support policy and practice for common mental disorders (CMD) anxiety and depression would be greatly enhanced by appropriate validation of diagnostic codes and algorithms for data extraction. We aimed to create a robust research platform for CMD using population-based, routinely collected primary care electronic data. We developed a set of Read code lists (diagnosis, symptoms, treatments) for the identification of anxiety and depression in the General Practice Database (GPD) within the Secure Anonymised Information Linkage Databank at Swansea University, and assessed 12 algorithms for Read codes to define cases according to various criteria. Annual incidence rates were calculated per 1000 person years at risk (PYAR) to assess recording practice for these CMD between January 1st 2000 and December 31st 2009. We anonymously linked the 2799 MHI-5 Caerphilly Health and Social Needs Survey (CHSNS) respondents aged 18 to 74 years to their routinely collected GP data in SAIL. We estimated the sensitivity, specificity and positive predictive value of the various algorithms using the MHI-5 as the gold standard. The incidence of combined depression/anxiety diagnoses remained stable over the ten-year period in a population of over 500,000 but symptoms increased from 6.5 to 20.7 per 1000 PYAR. A 'historical' GP diagnosis for depression/anxiety currently treated plus a current diagnosis (treated or untreated) resulted in a specificity of 0.96, sensitivity 0.29 and PPV 0.76. Adding current symptom codes improved sensitivity (0.32) with a marginal effect on specificity (0.95) and PPV (0.74). We have developed an algorithm with a high specificity and PPV of detecting cases of anxiety and depression from routine GP data that incorporates symptom codes to reflect GP coding behaviour. We have demonstrated that using diagnosis and current treatment alone to identify cases for depression and anxiety using routinely collected primary care data will miss a number of true cases given changes in GP recording behaviour. The Read code lists plus the developed algorithms will be applicable to other routinely collected primary care datasets, creating a platform for future e-cohort research into these conditions.
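
    Validating any one of the twelve algorithms against the linked MHI-5 gold standard is a cross-tabulation, sketched below for paired per-person flags; the inputs are illustrative.

      def validate(algorithm_flags, gold_standard):
          """Sensitivity, specificity, and PPV from paired boolean flags."""
          pairs = list(zip(algorithm_flags, gold_standard))
          tp = sum(a and g for a, g in pairs)
          fp = sum(a and not g for a, g in pairs)
          fn = sum(g and not a for a, g in pairs)
          tn = sum(not a and not g for a, g in pairs)
          return {"sensitivity": tp / (tp + fn),
                  "specificity": tn / (tn + fp),
                  "ppv": tp / (tp + fp)}

      # e.g. validate(read_code_algorithm_cases, mhi5_cases)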

  1. Refining the accuracy of validated target identification through coding variant fine-mapping in type 2 diabetes.

    PubMed

    Mahajan, Anubha; Wessel, Jennifer; Willems, Sara M; Zhao, Wei; Robertson, Neil R; Chu, Audrey Y; Gan, Wei; Kitajima, Hidetoshi; Taliun, Daniel; Rayner, N William; Guo, Xiuqing; Lu, Yingchang; Li, Man; Jensen, Richard A; Hu, Yao; Huo, Shaofeng; Lohman, Kurt K; Zhang, Weihua; Cook, James P; Prins, Bram Peter; Flannick, Jason; Grarup, Niels; Trubetskoy, Vassily Vladimirovich; Kravic, Jasmina; Kim, Young Jin; Rybin, Denis V; Yaghootkar, Hanieh; Müller-Nurasyid, Martina; Meidtner, Karina; Li-Gao, Ruifang; Varga, Tibor V; Marten, Jonathan; Li, Jin; Smith, Albert Vernon; An, Ping; Ligthart, Symen; Gustafsson, Stefan; Malerba, Giovanni; Demirkan, Ayse; Tajes, Juan Fernandez; Steinthorsdottir, Valgerdur; Wuttke, Matthias; Lecoeur, Cécile; Preuss, Michael; Bielak, Lawrence F; Graff, Marielisa; Highland, Heather M; Justice, Anne E; Liu, Dajiang J; Marouli, Eirini; Peloso, Gina Marie; Warren, Helen R; Afaq, Saima; Afzal, Shoaib; Ahlqvist, Emma; Almgren, Peter; Amin, Najaf; Bang, Lia B; Bertoni, Alain G; Bombieri, Cristina; Bork-Jensen, Jette; Brandslund, Ivan; Brody, Jennifer A; Burtt, Noël P; Canouil, Mickaël; Chen, Yii-Der Ida; Cho, Yoon Shin; Christensen, Cramer; Eastwood, Sophie V; Eckardt, Kai-Uwe; Fischer, Krista; Gambaro, Giovanni; Giedraitis, Vilmantas; Grove, Megan L; de Haan, Hugoline G; Hackinger, Sophie; Hai, Yang; Han, Sohee; Tybjærg-Hansen, Anne; Hivert, Marie-France; Isomaa, Bo; Jäger, Susanne; Jørgensen, Marit E; Jørgensen, Torben; Käräjämäki, Annemari; Kim, Bong-Jo; Kim, Sung Soo; Koistinen, Heikki A; Kovacs, Peter; Kriebel, Jennifer; Kronenberg, Florian; Läll, Kristi; Lange, Leslie A; Lee, Jung-Jin; Lehne, Benjamin; Li, Huaixing; Lin, Keng-Hung; Linneberg, Allan; Liu, Ching-Ti; Liu, Jun; Loh, Marie; Mägi, Reedik; Mamakou, Vasiliki; McKean-Cowdin, Roberta; Nadkarni, Girish; Neville, Matt; Nielsen, Sune F; Ntalla, Ioanna; Peyser, Patricia A; Rathmann, Wolfgang; Rice, Kenneth; Rich, Stephen S; Rode, Line; Rolandsson, Olov; Schönherr, Sebastian; Selvin, Elizabeth; Small, Kerrin S; Stančáková, Alena; Surendran, Praveen; Taylor, Kent D; Teslovich, Tanya M; Thorand, Barbara; Thorleifsson, Gudmar; Tin, Adrienne; Tönjes, Anke; Varbo, Anette; Witte, Daniel R; Wood, Andrew R; Yajnik, Pranav; Yao, Jie; Yengo, Loïc; Young, Robin; Amouyel, Philippe; Boeing, Heiner; Boerwinkle, Eric; Bottinger, Erwin P; Chowdhury, Rajiv; Collins, Francis S; Dedoussis, George; Dehghan, Abbas; Deloukas, Panos; Ferrario, Marco M; Ferrières, Jean; Florez, Jose C; Frossard, Philippe; Gudnason, Vilmundur; Harris, Tamara B; Heckbert, Susan R; Howson, Joanna M M; Ingelsson, Martin; Kathiresan, Sekar; Kee, Frank; Kuusisto, Johanna; Langenberg, Claudia; Launer, Lenore J; Lindgren, Cecilia M; Männistö, Satu; Meitinger, Thomas; Melander, Olle; Mohlke, Karen L; Moitry, Marie; Morris, Andrew D; Murray, Alison D; de Mutsert, Renée; Orho-Melander, Marju; Owen, Katharine R; Perola, Markus; Peters, Annette; Province, Michael A; Rasheed, Asif; Ridker, Paul M; Rivadineira, Fernando; Rosendaal, Frits R; Rosengren, Anders H; Salomaa, Veikko; Sheu, Wayne H-H; Sladek, Rob; Smith, Blair H; Strauch, Konstantin; Uitterlinden, André G; Varma, Rohit; Willer, Cristen J; Blüher, Matthias; Butterworth, Adam S; Chambers, John Campbell; Chasman, Daniel I; Danesh, John; van Duijn, Cornelia; Dupuis, Josée; Franco, Oscar H; Franks, Paul W; Froguel, Philippe; Grallert, Harald; Groop, Leif; Han, Bok-Ghee; Hansen, Torben; Hattersley, Andrew T; Hayward, Caroline; Ingelsson, Erik; Kardia, Sharon L R; Karpe, Fredrik; Kooner, Jaspal Singh; Köttgen, Anna; 
Kuulasmaa, Kari; Laakso, Markku; Lin, Xu; Lind, Lars; Liu, Yongmei; Loos, Ruth J F; Marchini, Jonathan; Metspalu, Andres; Mook-Kanamori, Dennis; Nordestgaard, Børge G; Palmer, Colin N A; Pankow, James S; Pedersen, Oluf; Psaty, Bruce M; Rauramaa, Rainer; Sattar, Naveed; Schulze, Matthias B; Soranzo, Nicole; Spector, Timothy D; Stefansson, Kari; Stumvoll, Michael; Thorsteinsdottir, Unnur; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Wareham, Nicholas J; Wilson, James G; Zeggini, Eleftheria; Scott, Robert A; Barroso, Inês; Frayling, Timothy M; Goodarzi, Mark O; Meigs, James B; Boehnke, Michael; Saleheen, Danish; Morris, Andrew P; Rotter, Jerome I; McCarthy, Mark I

    2018-04-01

    We aggregated coding variant data for 81,412 type 2 diabetes cases and 370,832 controls of diverse ancestry, identifying 40 coding variant association signals (P < 2.2 × 10⁻⁷); of these, 16 map outside known risk-associated loci. We make two important observations. First, only five of these signals are driven by low-frequency variants: even for these, effect sizes are modest (odds ratio ≤1.29). Second, when we used large-scale genome-wide association data to fine-map the associated variants in their regional context, accounting for the global enrichment of complex trait associations in coding sequence, compelling evidence for coding variant causality was obtained for only 16 signals. At 13 others, the associated coding variants clearly represent 'false leads' with potential to generate erroneous mechanistic inference. Coding variant associations offer a direct route to biological insight for complex diseases and identification of validated therapeutic targets; however, appropriate mechanistic inference requires careful specification of their causal contribution to disease predisposition.

  2. A Comprehensive High Performance Predictive Tool for Fusion Liquid Metal Hydromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Peter; Chhabra, Rupanshi; Munipalli, Ramakanth

    In the Phase I SBIR project, HyPerComp and Texcel initiated the development of two induction-based MHD codes as a predictive tool for fusion hydromagnetics. The newly developed codes overcome the deficiency of other MHD codes based on the quasi-static approximation by defining a more general mathematical model that uses the induced magnetic field, rather than the electric potential, as the main electromagnetic variable. The UCLA code is a finite-difference staggered-mesh code that serves as a supplementary tool to the massively parallel finite-volume code developed by HyPerComp. As there is no suitable experimental data under blanket-relevant conditions for code validation, code-to-code comparisons and comparisons against analytical solutions were successfully performed for three selected test cases: (1) lid-driven MHD flow, (2) flow in a rectangular duct in a transverse magnetic field, and (3) unsteady finite-magnetic-Reynolds-number flow in a rectangular enclosure. The performed tests suggest that the developed codes are accurate and robust. Further work will focus on enhancing the code capabilities towards higher flow parameters and faster computations. At the conclusion of the current Phase II project, we have completed preliminary validation efforts for unsteady mixed-convection MHD flows (against the limited data currently available in the literature) and demonstrated flow behavior in large 3D channels including important geometrical features. Code enhancements such as periodic boundary conditions and unmatched mesh structures are also ready. As proposed, we have built upon these strengths and explored a much wider range of Grashof and Hartmann numbers under various flow conditions, ranging from flows in a rectangular duct to prototypic blanket modules and liquid metal PFCs. Parametric studies, numerical and physical model improvements to expand the scope of simulations, code demonstration, and continued validation activities have also been completed.
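
    The duct-flow test case above is typically checked against closed-form MHD solutions. As a hedged illustration (my own sketch, not code from the HyPerComp/Texcel project), the classic Hartmann velocity profile for fully developed flow between insulating plates is a common analytic reference for exactly this kind of validation:

        import numpy as np

        def hartmann_profile(y, Ha):
            """Normalized velocity u(y)/u(0) for fully developed MHD flow
            between insulating plates at y = -1 and y = +1 (Hartmann number Ha)."""
            return (np.cosh(Ha) - np.cosh(Ha * y)) / (np.cosh(Ha) - 1.0)

        y = np.linspace(-1.0, 1.0, 11)
        for Ha in (1.0, 10.0, 50.0):
            u = hartmann_profile(y, Ha)
            # Profiles flatten in the core and steepen at the walls as Ha grows.
            print(f"Ha={Ha:5.1f}  u(0)={u[5]:.3f}  u(-0.8)={u[1]:.3f}")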

  3. Quiet, Efficient Fans for Spaceflight: An Overview of NASA's Technology Development Plan

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2010-01-01

    A Technology Development Plan to improve the aerodynamic and acoustic performance of spaceflight fans has been submitted to NASA's Exploration Technology Development Program. The plan describes a research program intended to make broader use of the technology developed at NASA Glenn to increase the efficiency and reduce the noise of aircraft engine fans. The goal is to develop a set of well-characterized government-owned fans nominally suited for spacecraft ventilation and cooling systems. NASA's Exploration Life Support community will identify design point conditions for the fans in this study. Computational Fluid Dynamics codes will be used in the design and analysis process. The fans will be built and used in a series of tests. Data from aerodynamic and acoustic performance tests will be used to validate performance predictions. These performance maps will also be entered into a database to help spaceflight fan system developers make informed design choices. Velocity measurements downstream of fan rotor blades and stator vanes will also be collected and used for code validation. Details of the fan design, analysis, and testing will be publicly reported. With access to fan geometry and test data, the small fan industry can independently evaluate design and analysis methods and work towards improvement.

  4. Overview of the NCC

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey

    2001-01-01

    A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team comprises Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. The current effort is focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.

  5. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have uncorrelated fission emissions. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
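
    Among the benchmark observables listed above are the singles and doubles rates and Feynman histograms. A minimal sketch (illustrative only, using synthetic Poisson data rather than benchmark measurements) of the Feynman-Y statistic, the variance-to-mean ratio of gated counts minus one, which is what such histograms summarize:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic, uncorrelated detection times; a multiplying system would
        # show Y > 0 from correlated fission chains.
        event_times = np.sort(rng.uniform(0.0, 100.0, size=20000))

        def feynman_y(times, gate_width):
            """Bin events into non-overlapping gates; return Y = var/mean - 1."""
            n_gates = int(times[-1] // gate_width)
            counts, _ = np.histogram(times, bins=n_gates,
                                     range=(0.0, n_gates * gate_width))
            return counts.var() / counts.mean() - 1.0

        for T in (1e-3, 1e-2, 1e-1):  # gate widths in seconds
            print(f"gate {T:.0e} s: Y = {feynman_y(event_times, T):+.4f}")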

  6. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifold and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  7. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson

    Historically, radiation transport codes have uncorrelated fission emissions. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.

  8. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, which involves time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
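
    The observed order-of-accuracy tests mentioned above follow a standard recipe: compute the same quantity on three systematically refined grids and extract the convergence order. A hedged sketch of that calculation (values are illustrative, not from the retropropulsion study):

        import math

        def observed_order(f_coarse, f_medium, f_fine, r=2.0):
            """Observed order p from three grid levels with refinement ratio r."""
            return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

        # Example: a force coefficient converging at roughly second order.
        p = observed_order(f_coarse=1.0480, f_medium=1.0120, f_fine=1.0030)
        print(f"observed order p = {p:.2f}")  # ~2 for a formally 2nd-order scheme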

  9. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
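
    The double-difference operation described in the abstract is simple to state concretely. A hedged sketch (array contents and function names are illustrative, not from the patent) of a cross-delta between two correlated bands followed by an adjacent-delta, with the inverse post-decoding step:

        import numpy as np

        def double_difference(band_a, band_b):
            """Cross-delta (band_b - band_a), then adjacent-delta along the result."""
            cross = band_b - band_a            # removes band-to-band correlation
            return np.diff(cross, prepend=0)   # removes sample-to-sample correlation

        def undo_double_difference(band_a, dd):
            """Inverse post-decoding: cumulative sum, then add the reference band."""
            return np.cumsum(dd) + band_a

        a = np.array([100, 102, 105, 109, 114])  # reference band
        b = a + np.array([5, 6, 6, 7, 7])        # correlated second band
        dd = double_difference(a, b)
        assert np.array_equal(undo_double_difference(a, dd), b)
        print(dd)  # small-magnitude residuals, cheaper to entropy-code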

  10. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  11. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  12. Development and Validation of an Agency for Healthcare Research and Quality Indicator for Mortality After Congenital Heart Surgery Harmonized With Risk Adjustment for Congenital Heart Surgery (RACHS-1) Methodology.

    PubMed

    Jenkins, Kathy J; Koch Kupiec, Jennifer; Owens, Pamela L; Romano, Patrick S; Geppert, Jeffrey J; Gauvreau, Kimberlee

    2016-05-20

    The National Quality Forum previously approved a quality indicator for mortality after congenital heart surgery developed by the Agency for Healthcare Research and Quality (AHRQ). Several parameters of the validated Risk Adjustment for Congenital Heart Surgery (RACHS-1) method were included, but others differed. As part of the National Quality Forum endorsement maintenance process, developers were asked to harmonize the 2 methodologies. Parameters that were identical between the 2 methods were retained. AHRQ's Healthcare Cost and Utilization Project State Inpatient Databases (SID) 2008 were used to select optimal parameters where differences existed, with a goal to maximize model performance and face validity. Inclusion criteria were not changed and included all discharges for patients <18 years with International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes for congenital heart surgery or nonspecific heart surgery combined with congenital heart disease diagnosis codes. The final model includes procedure risk group, age (0-28 days, 29-90 days, 91-364 days, 1-17 years), low birth weight (500-2499 g), other congenital anomalies (Clinical Classifications Software 217, except for 758.xx), multiple procedures, and transfer-in status. Among 17 945 eligible cases in the SID 2008, the c statistic for model performance was 0.82. In the SID 2013 validation data set, the c statistic was 0.82. Risk-adjusted mortality rates by center ranged from 0.9% to 4.1% (5th-95th percentile). Congenital heart surgery programs can now obtain national benchmarking reports by applying AHRQ Quality Indicator software to hospital administrative data, based on the harmonized RACHS-1 method, with high discrimination and face validity. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  13. Examples of Use of SINBAD Database for Nuclear Data and Code Validation

    NASA Astrophysics Data System (ADS)

    Kodeli, Ivan; Žerovnik, Gašper; Milocco, Alberto

    2017-09-01

    The SINBAD database currently contains compilations and evaluations of over 100 shielding benchmark experiments. The SINBAD database is widely used for code and data validation. Materials covered include: air, N, O, H2O, Al, Be, Cu, graphite, concrete, Fe, stainless steel, Pb, Li, Ni, Nb, SiC, Na, W, V, and mixtures thereof. Over 40 organisations from 14 countries and 2 international organisations have contributed data and work in support of SINBAD. Examples of the use of the database in the scope of different international projects, such as the Working Party on Evaluation Cooperation of the OECD and the European Fusion Programme, demonstrate the merit and possible usage of the database for the validation of modern nuclear data evaluations and new computer codes.

  14. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  15. Methodological considerations for observational coding of eating and feeding behaviors in children and their families.

    PubMed

    Pesch, Megan H; Lumeng, Julie C

    2017-12-15

    Behavioral coding of videotaped eating and feeding interactions can provide researchers with rich observational data and unique insights into eating behaviors, food intake, and food selection, as well as interpersonal and mealtime dynamics of children and their families. Unlike self-report measures of eating and feeding practices, the coding of videotaped eating and feeding behaviors can allow for quantitative and qualitative examination of behaviors and practices that participants may not self-report. While this methodology is increasingly common, behavioral coding protocols and methodology are not widely shared in the literature. This has important implications for validity and reliability of coding schemes across settings. Additional guidance on how to design, implement, code and analyze videotaped eating and feeding behaviors could contribute to advancing the science of behavioral nutrition. The objectives of this narrative review are to review methodology for the design, operationalization, and coding of videotaped behavioral eating and feeding data in children and their families, and to highlight best practices. When capturing eating and feeding behaviors through analysis of videotapes, it is important for the study and coding to be hypothesis driven. Study design considerations include how to best capture the target behaviors through selection of a controlled experimental laboratory environment versus home mealtime, duration of video recording, number of observations needed to achieve reliability across eating episodes, as well as technical issues in video recording and sound quality. Study design must also take into account plans for coding the target behaviors, which may include behavior frequency, duration, categorization or qualitative descriptors. Coding scheme creation and refinement occur through an iterative process. Reliability between coders can be challenging to achieve but is paramount to the scientific rigor of the methodology. The analysis approach depends on how the data were coded and collapsed. Behavioral coding of videotaped eating and feeding behaviors can capture rich data "in-vivo" that is otherwise unobtainable from self-report measures. While data collection and coding are time-intensive, the data yielded can be extremely valuable. Additional sharing of methodology and coding schemes around eating and feeding behaviors could advance the science and field.

  16. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

    a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  17. Use of diagnosis codes for detection of clinically significant opioid poisoning in the emergency department: A retrospective analysis of a surveillance case definition.

    PubMed

    Reardon, Joseph M; Harmon, Katherine J; Schult, Genevieve C; Staton, Catherine A; Waller, Anna E

    2016-02-08

    Although fatal opioid poisonings tripled from 1999 to 2008, data describing nonfatal poisonings are rare. Public health authorities are in need of tools to track opioid poisonings in near real time. We determined the utility of ICD-9-CM diagnosis codes for identifying clinically significant opioid poisonings in a state-wide emergency department (ED) surveillance system. We sampled visits from four hospitals from July 2009 to June 2012 with diagnosis codes of 965.00, 965.01, 965.02 and 965.09 (poisoning by opiates and related narcotics) and/or an external cause of injury code of E850.0-E850.2 (accidental poisoning by opiates and related narcotics), and developed a novel case definition to determine in which cases opioid poisoning prompted the ED visit. We calculated the percentage of visits coded for opioid poisoning that were clinically significant and compared it to the percentage of visits coded for poisoning by non-opioid agents in which there was actually poisoning by an opioid agent. We created a multivariate regression model to determine if other collected triage data can improve the positive predictive value of diagnosis codes alone for detecting clinically significant opioid poisoning. 70.1% of visits (standard error 2.4%) coded for opioid poisoning were primarily prompted by opioid poisoning. The remainder of visits represented opioid exposure in the setting of other primary diseases. Among non-opioid poisoning codes reviewed, up to 36% were reclassified as an opioid poisoning. In multivariate analysis, only naloxone use improved the positive predictive value of ICD-9-CM codes for identifying clinically significant opioid poisoning, but was associated with a high false negative rate. This surveillance mechanism identifies many clinically significant opioid overdoses with a high positive predictive value. With further validation, it may help target control measures such as prescriber education and pharmacy monitoring.
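
    The headline figure above is a positive predictive value: the share of code-flagged visits confirmed on review. A trivial but explicit sketch (counts are illustrative, not the study's data):

        def ppv(true_positives, flagged_total):
            """Share of code-flagged ED visits confirmed as clinically significant."""
            return true_positives / flagged_total

        flagged = 500    # visits carrying an opioid-poisoning diagnosis code
        confirmed = 350  # confirmed as poisoning-prompted on chart review
        print(f"PPV = {ppv(confirmed, flagged):.1%}")  # -> 70.0%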

  18. Predictions of BuChE inhibitors using support vector machine and naive Bayesian classification techniques in drug discovery.

    PubMed

    Fang, Jiansong; Yang, Ranyao; Gao, Li; Zhou, Dan; Yang, Shengqian; Liu, Ai-Lin; Du, Guan-hua

    2013-11-25

    Butyrylcholinesterase (BuChE, EC 3.1.1.8) is an important pharmacological target for Alzheimer's disease (AD) treatment. However, the currently available BuChE inhibitor screening assays are expensive, labor-intensive, and compound-dependent. It is necessary to develop robust in silico methods to predict the activities of BuChE inhibitors for lead identification. In this investigation, support vector machine (SVM) models and naive Bayesian models were built to discriminate BuChE inhibitors (BuChEIs) from noninhibitors. Each molecule was initially represented by 1870 structural descriptors (1235 from ADRIANA.Code, 334 from MOE, and 301 from Discovery Studio). Correlation analysis and a stepwise variable selection method were applied to identify activity-related descriptors for the prediction models. Additionally, structural fingerprint descriptors were added to improve the predictive ability of the models, which was measured by cross-validation, a test set validation with 1001 compounds, and an external test set validation with 317 diverse chemicals. The best two models gave Matthews correlation coefficients of 0.9551 and 0.9550 for the test set and 0.9132 and 0.9221 for the external test set. To demonstrate the practical applicability of the models in virtual screening, we screened an in-house data set with 3601 compounds, and 30 compounds were selected for further bioactivity assay. The assay results showed that 10 out of 30 compounds exerted significant BuChE inhibitory activities with IC50 values ranging from 0.32 to 22.22 μM, among which three new scaffolds as BuChE inhibitors were identified for the first time. To the best of our knowledge, this is the first report on BuChE inhibitors using machine learning approaches. The models generated from SVM and naive Bayesian approaches successfully predicted BuChE inhibitors. The study proved the feasibility of a new method for predicting bioactivities of ligands and discovering novel lead compounds.
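
    Model quality above is reported as the Matthews correlation coefficient (MCC), which balances all four cells of the binary confusion matrix. A short sketch of the computation (counts are illustrative):

        import math

        def mcc(tp, tn, fp, fn):
            """Matthews correlation coefficient from confusion-matrix counts."""
            num = tp * tn - fp * fn
            den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return num / den if den else 0.0

        print(f"MCC = {mcc(tp=480, tn=490, fp=10, fn=21):.4f}")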

  19. Comprehensive analysis of differentially expressed profiles of lncRNAs and construction of miR-133b mediated ceRNA network in colorectal cancer.

    PubMed

    Wu, Hao; Wu, Runliu; Chen, Miao; Li, Daojiang; Dai, Jing; Zhang, Yi; Gao, Kai; Yu, Jun; Hu, Gui; Guo, Yihang; Lin, Changwei; Li, Xiaorong

    2017-03-28

    Growing evidence suggests that long non-coding RNAs (lncRNAs) play a key role in tumorigenesis. However, the mechanism remains largely unknown. Thousands of significantly dysregulated lncRNAs and mRNAs were identified by microarray. Furthermore, a miR-133b-mediated lncRNA-mRNA ceRNA network was revealed, a subset of which was validated in 14 paired CRC patient tumor/non-tumor samples. Gene set enrichment analysis (GSEA) results demonstrated that lncRNAs ENST00000520055 and ENST00000535511 shared KEGG pathways with miR-133b target genes. We used microarrays to survey the lncRNA and mRNA expression profiles of colorectal cancer and para-cancer tissues. Gene Ontology (GO) and KEGG pathway enrichment analyses were performed to explore the functions of the significantly dysregulated genes. An innovative method was employed that combined analyses of two microarray data sets to construct a miR-133b-mediated lncRNA-mRNA competing endogenous RNA (ceRNA) network. Quantitative RT-PCR analysis was used to validate part of this network. GSEA was used to predict the potential functions of these lncRNAs. This study identifies and validates a new method to investigate the miR-133b-mediated lncRNA-mRNA ceRNA network and lays the foundation for future investigation into the role of lncRNAs in colorectal cancer.

  20. Development and validation of an epidemiologic case definition of epilepsy for use with routinely collected Australian health data.

    PubMed

    Tan, Michael; Wilson, Ian; Braganza, Vanessa; Ignatiadis, Sophia; Boston, Ray; Sundararajan, Vijaya; Cook, Mark J; D'Souza, Wendyl J

    2015-10-01

    We report the diagnostic validity of a selection algorithm for identifying epilepsy cases. Retrospective validation study of International Classification of Diseases 10th Revision Australian Modification (ICD-10AM)-coded hospital records and pharmaceutical data sampled from 300 consecutive potential epilepsy-coded cases and 300 randomly chosen cases without epilepsy from 3/7/2012 to 10/7/2013. Two epilepsy specialists independently validated the diagnosis of epilepsy. A multivariable logistic regression model was fitted to identify the optimum coding algorithm for epilepsy and was internally validated. One hundred fifty-eight out of three hundred (52.6%) epilepsy-coded records and 0/300 (0%) nonepilepsy records were confirmed to have epilepsy. The kappa for interrater agreement was 0.89 (95% CI=0.81-0.97). The model utilizing epilepsy (G40), status epilepticus (G41) and ≥1 antiepileptic drug (AED) conferred the highest positive predictive value of 81.4% (95% CI=73.1-87.9) and a specificity of 99.9% (95% CI=99.9-100.0). The area under the receiver operating curve was 0.90 (95% CI=0.88-0.93). When combined with pharmaceutical data, the precision of case identification for epilepsy data linkage design was considerably improved and could provide considerable potential for efficient and reasonably accurate case ascertainment in epidemiological studies. Copyright © 2015 Elsevier Inc. All rights reserved.
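
    The optimal algorithm reported above combines diagnosis codes with pharmaceutical data. A hedged sketch of that selection rule (field names are illustrative; the study's actual implementation is a coded-record query):

        def is_epilepsy_case(diagnosis_codes, aed_count):
            """Flag a record as epilepsy: ICD-10-AM G40/G41 code AND >= 1 AED."""
            has_dx = any(code.startswith(("G40", "G41")) for code in diagnosis_codes)
            return has_dx and aed_count >= 1

        print(is_epilepsy_case(["G40.3", "I10"], aed_count=2))  # True
        print(is_epilepsy_case(["G40.3", "I10"], aed_count=0))  # False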

  1. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations.

    PubMed

    Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2015-08-01

    Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Healthy Aging in Neighborhoods of Diversity across the Life Span study. African-American and White adults with two dietary recalls (n = 2177). Differences existed in the lists of foods most frequently consumed by mealtime and race when comparing results based on the original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Use of combination codes provided a more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet-health relationships.
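
    To make the coding distinction concrete, a hedged sketch of assigning combination codes to simultaneously consumed foods (the grouping rule and food names are illustrative, not the study's coding scheme):

        from collections import Counter

        recalls = [
            [("pancakes", "grain"), ("syrup", "sweetener")],  # eaten together
            [("coffee", "beverage"), ("milk", "dairy")],
            [("coffee", "beverage")],                         # consumed alone
        ]

        def code_recall(items):
            """A single food keeps its own group; a combination is keyed by the
            major (first-listed) component and tagged as a combination."""
            if len(items) == 1:
                return items[0][1]
            major_food, major_group = items[0]
            return f"{major_group}:combination({major_food})"

        print(Counter(code_recall(meal) for meal in recalls))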

  2. Summary of EASM Turbulence Models in CFL3D With Validation Test Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2003-01-01

    This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.

  3. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Uranium Metal, Oxide, and Solution Systems on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell

    In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capability in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as k-eff.
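
    Criticality validation under ANSI/ANS-8.24 ultimately reduces benchmark results to a bias and bias uncertainty. A hedged sketch of that summary statistic (k-eff values are illustrative, not results from the Moonlight study):

        import statistics

        calc_keff  = [0.9982, 1.0011, 0.9975, 1.0004, 0.9969]  # calculated
        bench_keff = [1.0000, 1.0000, 1.0000, 1.0000, 1.0000]  # benchmark values

        diffs = [c - b for c, b in zip(calc_keff, bench_keff)]
        bias = statistics.mean(diffs)
        sigma = statistics.stdev(diffs)
        print(f"bias = {bias:+.5f} +/- {sigma:.5f} over {len(diffs)} cases")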

  4. A meta-model for computer executable dynamic clinical safety checklists.

    PubMed

    Nan, Shan; Van Gorp, Pieter; Lu, Xudong; Kaymak, Uzay; Korsten, Hendrikus; Vdovjak, Richard; Duan, Huilong

    2017-12-12

    A safety checklist is a cognitive tool that reinforces the short-term memory of medical workers, with the purpose of reducing medical errors caused by oversight and ignorance. To facilitate the daily use of safety checklists, computerized systems embedded in the clinical workflow and adapted to patient context are increasingly being developed. However, the current hard-coded approach to implementing checklists in these systems increases the cognitive effort of clinical experts and the coding effort for informaticists. This is due to the lack of a formal representation format that is both understandable by clinical experts and executable by computer programs. We developed a dynamic checklist meta-model with a three-step approach. Dynamic checklist modeling requirements were extracted by performing a domain analysis. Then, existing modeling approaches and tools were investigated with the purpose of reusing these languages. Finally, the meta-model was developed by eliciting domain concepts and their hierarchies. The feasibility of using the meta-model was validated by two case studies. The meta-model was mapped to specific modeling languages according to the requirements of hospitals. Using the proposed meta-model, a comprehensive coronary artery bypass graft peri-operative checklist set and a percutaneous coronary intervention peri-operative checklist set have been developed in a Dutch hospital and a Chinese hospital, respectively. The result shows that it is feasible to use the meta-model to facilitate the modeling and execution of dynamic checklists. We proposed a novel meta-model for dynamic checklists with the purpose of facilitating their creation. The meta-model is a framework for reusing existing modeling languages and tools to model dynamic checklists. The feasibility of using the meta-model is validated by implementing a use case in the system.
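
    A minimal executable sketch of the idea, assuming a very reduced meta-model (class and field names are my own illustration, not the paper's schema): checklist items carry a patient-context condition, and only items whose condition holds are presented:

        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class ChecklistItem:
            text: str
            applies: Callable[[dict], bool] = lambda ctx: True  # context guard
            done: bool = False

        @dataclass
        class DynamicChecklist:
            name: str
            items: list = field(default_factory=list)

            def active_items(self, context: dict):
                """Only items whose context condition holds are shown."""
                return [i for i in self.items if i.applies(context)]

        cabg = DynamicChecklist("CABG pre-op", [
            ChecklistItem("Confirm patient identity"),
            ChecklistItem("Type and cross-match blood",
                          applies=lambda ctx: ctx.get("expected_blood_loss_ml", 0) > 500),
        ])
        print([i.text for i in cabg.active_items({"expected_blood_loss_ml": 800})])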

  5. Comparison of Vital Statistics Definitions of Suicide against a Coroner Reference Standard: A Population-Based Linkage Study.

    PubMed

    Gatov, Evgenia; Kurdyak, Paul; Sinyor, Mark; Holder, Laura; Schaffer, Ayal

    2018-03-01

    We sought to determine the utility of health administrative databases for population-based suicide surveillance, as these data are generally more accessible and more integrated with other data sources compared to coroners' records. In this retrospective validation study, we identified all coroner-confirmed suicides between 2003 and 2012 in Ontario residents aged 21 and over and linked this information to Statistics Canada's vital statistics data set. We examined the overlap between the underlying cause of death field and secondary causes of death using ICD-9 and ICD-10 codes for deliberate self-harm (i.e., suicide) and examined the sociodemographic and clinical characteristics of misclassified records. Among 10,153 linked deaths, there was a very high degree of overlap between records coded as deliberate self-harm in the vital statistics data set and coroner-confirmed suicides using both ICD-9 and ICD-10 definitions (96.88% and 96.84% sensitivity, respectively). This alignment steadily increased throughout the study period (from 95.9% to 98.8%). Other vital statistics diagnoses in primary fields included uncategorised signs and symptoms. Vital statistics records that were misclassified did not differ from valid records in terms of sociodemographic characteristics but were more likely to have had an unspecified place of injury on the death certificate (P < 0.001), more likely to have died at a health care facility (P < 0.001), to have had an autopsy (P = 0.002), and to have been admitted to a psychiatric hospital in the year preceding death (P = 0.03). A high degree of concordance between vital statistics and coroner classification of suicide deaths suggests that health administrative data can reliably be used to identify suicide deaths.

  6. The Diaper Change Play: Validation of a New Observational Assessment Tool for Early Triadic Family Interactions in the First Month Postpartum.

    PubMed

    Rime, Jérôme; Tissot, Hervé; Favez, Nicolas; Watson, Michael; Stadlmayr, Werner

    2018-01-01

    The quality of family relations, observed during mother-father-infant triadic interactions, has been shown to be an important contributor to child social and affective development, beyond the quality of dyadic mother-child, father-child, and marital relationships. Triadic interactions have been well described in families with 3 month olds and older children using the Lausanne Trilogue Play (LTP). Little is known about the development of mother-father-baby interactions in the very first weeks postpartum, mostly because no specific observational setting or instrument had yet been designed to cover this age. To fill this gap, we adapted the LTP to create a new observational setting, namely the Diaper Change Play (DCP). Interactions are assessed using the Family Alliance Assessment Scales for DCP (FAAS-DCP). We present the validation of the DCP and its coding system, the FAAS-DCP. The three validation studies presented here (44 mother-father-child triads) involve a sample of parents with 3-week-old infants recruited in two maternity wards (n = 32 and n = 12) in Switzerland. Infants from both sites were all healthy according to their APGAR scores, weight at birth, and scores on the NICU Network Neurobehavioral Scale (NNNS), which was additionally conducted on the twelve infants recruited in one of the maternity wards. Results showed that the "FAAS-DCP" coding system has good psychometric properties, with good internal consistency and satisfactory reliability among the three independent raters. Finally, the "FAAS-DCP" scores on the interactive dimensions are comparable to the similar dimensions in the FAAS-LTP. The results showed that there is no statistically significant difference in scores between the "FAAS-DCP" and the "FAAS", which is consistent with previous studies showing stability in triadic interaction patterns from pregnancy to 18 months. These first results indicated that the DCP is a promising observational setting, able to assess the development of early family triadic functioning. The DCP and the FAAS-DCP offer both clinicians and researchers a way to improve the understanding of the establishment of early family functioning as well as to study the young infant's triangular capacity. Perspectives for future research will be discussed.

  7. The Diaper Change Play: Validation of a New Observational Assessment Tool for Early Triadic Family Interactions in the First Month Postpartum

    PubMed Central

    Rime, Jérôme; Tissot, Hervé; Favez, Nicolas; Watson, Michael; Stadlmayr, Werner

    2018-01-01

    The quality of family relations, observed during mother–father–infant triadic interactions, has been shown to be an important contributor to child social and affective development, beyond the quality of dyadic mother–child, father–child, and marital relationships. Triadic interactions have been well described in families with 3 month olds and older children using the Lausanne Trilogue Play (LTP). Little is known about the development of mother–father–baby interactions in the very first weeks postpartum, mostly because no specific observational setting or instrument had yet been designed to cover this age. To fill this gap, we adapted the LTP to create a new observational setting, namely the Diaper Change Play (DCP). Interactions are assessed using the Family Alliance Assessment Scales for DCP (FAAS-DCP). We present the validation of the DCP and its coding system, the FAAS-DCP. The three validation studies presented here (44 mother–father–child triads) involve a sample of parents with 3-week-old infants recruited in two maternity wards (n = 32 and n = 12) in Switzerland. Infants from both sites were all healthy according to their APGAR scores, weight at birth, and scores on the NICU Network Neurobehavioral Scale (NNNS), which was additionally conducted on the twelve infants recruited in one of the maternity wards. Results showed that the "FAAS-DCP" coding system has good psychometric properties, with good internal consistency and satisfactory reliability among the three independent raters. Finally, the "FAAS-DCP" scores on the interactive dimensions are comparable to the similar dimensions in the FAAS-LTP. The results showed that there is no statistically significant difference in scores between the "FAAS-DCP" and the "FAAS", which is consistent with previous studies showing stability in triadic interaction patterns from pregnancy to 18 months. These first results indicated that the DCP is a promising observational setting, able to assess the development of early family triadic functioning. The DCP and the FAAS-DCP offer both clinicians and researchers a way to improve the understanding of the establishment of early family functioning as well as to study the young infant's triangular capacity. Perspectives for future research will be discussed. PMID:29706912

  8. Descriptive analysis of the verbal behavior of a therapist: a known-group validity analysis of the putative behavioral functions involved in clinical interaction.

    PubMed

    Virues-Ortega, Javier; Montaño-Fidalgo, Montserrat; Froján-Parga, María Xesús; Calero-Elvira, Ana

    2011-12-01

    This study analyzes the interobserver agreement and hypothesis-based known-group validity of the Therapist's Verbal Behavior Category System (SISC-INTER). The SISC-INTER is a behavioral observation protocol comprised of a set of verbal categories representing putative behavioral functions of the in-session verbal behavior of a therapist (e.g., discriminative, reinforcing, punishing, and motivational operations). The complete therapeutic process of a clinical case of an individual with marital problems was recorded (10 sessions, 8 hours), and data were arranged in a temporal sequence using 10-min periods. Hypotheses based on the expected performance of the putative behavioral functions portrayed by the SISC-INTER codes across prevalent clinical activities (i.e., assessing, explaining, Socratic method, providing clinical guidance) were tested using autoregressive integrated moving average (ARIMA) models. Known-group validity analyses provided support to all hypotheses. The SISC-INTER may be a useful tool to describe therapist-client interaction in operant terms. The utility of reliable and valid protocols for the descriptive analysis of clinical practice in terms of verbal behavior is discussed. Copyright © 2011. Published by Elsevier Ltd.

  9. Hypersonic merged layer blunt body flows with wakes

    NASA Technical Reports Server (NTRS)

    Jain, Amolak C.; Dahm, Werner K.

    1991-01-01

    An attempt is made here to understand the basic physics of the flowfield with wake on a blunt body of revolution under hypersonic rarefied conditions. A merged layer model of flow is envisioned. The full steady-state Navier-Stokes equations in a spherical polar coordinate system are computed from the surface, with slip and temperature jump conditions, out to the free stream by the Accelerated Successive Replacement method of numerical integration. The analysis is developed for bodies of arbitrary shape, but actual computations have been carried out for a sphere and a sphere-cone body. Particular attention is paid to establishing the limits for the onset of separation, wake closure, shear-layer impingement, and the formation and dissipation of shocks in the flowfield. The validity of the results is established by comparing the present results for the sphere with corresponding results of the SOFIA code in their common region of validity and with experimental data.

  10. Observation of early childhood physical aggression: a psychometric study of the system for coding early physical aggression.

    PubMed

    Mesman, Judi; Alink, Lenneke R A; van Zeijl, Jantien; Stolk, Mirjam N; Bakermans-Kranenburg, Marian J; van Ijzendoorn, Marinus H; Juffer, Femmie; Koot, Hans M

    2008-01-01

    We investigated the reliability and (convergent and discriminant) validity of an observational measure of physical aggression in toddlers and preschoolers, originally developed by Keenan and Shaw [1994]. The observation instrument is based on a developmental definition of aggression. Physical aggression was observed twice in a laboratory setting, the first time when children were 1-3 years old, and again 1 year later. Observed physical aggression was significantly related to concurrent mother-rated physical aggression for 2- to 4-year-olds, but not to maternal ratings of nonaggressive externalizing problems, indicating the measure's discriminant validity. However, we did not find significant 1-year stability of observed physical aggression in any of the age groups, whereas mother-rated physical aggression was significantly stable for all ages. The observational measure shows promise, but may have assessed state rather than trait aggression in our study. Copyright 2008 Wiley-Liss, Inc.

  11. Multiaxial Creep-Fatigue and Creep-Ratcheting Failures of Grade 91 and Haynes 230 Alloys Toward Addressing Design Issues of Gen IV Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Tasnim; Lissenden, Cliff; Carroll, Laura

    The proposed research will develop systematic sets of uniaxial and multiaxial experimental data at very high temperature (850-950°C) for Alloy 617. The loading histories to be prescribed in the experiments will induce creep-fatigue and creep-ratcheting failure mechanisms. These experimental responses will be scrutinized in order to quantify the influences of temperature and creep on fatigue and ratcheting failures. A unified constitutive model (UCM) will be developed and validated against these experimental responses. The improved UCM will be incorporated into the widely used commercial finite element software package ANSYS. The modified ANSYS will be validated so that it can be used for evaluating the very high temperature ASME-NH design-by-analysis methodology for Alloy 617, thereby addressing the ASME-NH design code issues.

  12. A Fast Healthcare Interoperability Resources (FHIR) layer implemented over i2b2.

    PubMed

    Boussadi, Abdelali; Zapletal, Eric

    2017-08-14

    Standards and technical specifications have been developed to define how the information contained in Electronic Health Records (EHRs) should be structured, semantically described, and communicated. Current trends rely on differentiating the representation of data instances from the definition of clinical information models. The dual model approach, which combines a reference model (RM) and a clinical information model (CIM), sets in practice this software design pattern. The most recent initiative, proposed by HL7, is called Fast Health Interoperability Resources (FHIR). The aim of our study was to investigate the feasibility of applying the FHIR standard to modeling and exposing EHR data of the Georges Pompidou European Hospital (HEGP) Informatics for Integrating Biology and the Bedside (i2b2) clinical data warehouse (CDW). We implemented a FHIR server over i2b2 to expose EHR data in relation to five FHIR resources: DiagnosticReport, MedicationOrder, Patient, Encounter, and Medication. The architecture of the server combines a Data Access Object design pattern and FHIR resource providers, implemented using the Java HAPI FHIR API. Two types of queries were tested: query type #1 requests the server to display DiagnosticReport resources, for which the diagnosis code is equal to a given ICD-10 code. A total of 80 DiagnosticReport resources, corresponding to 36 patients, were displayed. Query type #2 requests the server to display MedicationOrder resources, for which the FHIR Medication identification code is equal to a given code expressed in a French coding system. A total of 503 MedicationOrder resources, corresponding to 290 patients, were displayed. Results were validated by manually comparing the results of each request to the results displayed by an ad-hoc SQL query. We showed the feasibility of implementing a Java layer over the i2b2 database model to expose data of the CDW as a set of FHIR resources. An important part of this work was the structural and semantic mapping between the i2b2 model and the FHIR RM. To accomplish this, developers must manually browse the specifications of the FHIR standard. Our source code is freely available and can be adapted for use in other i2b2 sites.
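
    The two query types can be expressed as ordinary FHIR REST searches. A hedged sketch in Python (the paper's server is Java/HAPI; the base URL, code-system URIs, and code values below are placeholders, and the search-parameter names are assumptions rather than the HEGP deployment's exact interface):

        import requests

        BASE = "http://localhost:8080/fhir"  # hypothetical FHIR endpoint

        # Query type 1: DiagnosticReport resources carrying a given ICD-10 code.
        r1 = requests.get(f"{BASE}/DiagnosticReport",
                          params={"code": "http://hl7.org/fhir/sid/icd-10|I21.0"})

        # Query type 2: MedicationOrder resources for a given medication code.
        r2 = requests.get(f"{BASE}/MedicationOrder",
                          params={"code": "EXAMPLE-FR-CODESYSTEM|3400930000000"})

        for resp in (r1, r2):
            bundle = resp.json()  # a FHIR searchset Bundle
            print(bundle.get("resourceType"), bundle.get("total"))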

  13. Predictions of GPS X-Set Performance during the Places Experiment

    DTIC Science & Technology

    1979-07-01

    A previously existing GPS X-set receiver simulation was modified to include the received signal spectrum and the receiver code correlation operation. ... The X-set receiver simulation documented in Reference 3-1 is a direct sampled-data digital implementation of the GPS X-set. ... [Figure 3-6: simplified block diagram of code correlator operation and I-Q sampling.]

  14. RAZORBACK - A Research Reactor Transient Analysis Code Version 1.0 - Volume 3: Verification and Validation Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2017-04-01

    This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
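
    The first of the coupled equations Razorback solves, the point reactor kinetics equations, is easy to demonstrate in isolation. A hedged sketch integrating a step reactivity insertion with six delayed-neutron groups (parameter values are illustrative textbook numbers, not ACRR data):

        import numpy as np
        from scipy.integrate import solve_ivp

        beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
        lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # 1/s
        beta, LAMBDA = beta_i.sum(), 1e-4  # total beta; prompt generation time (s)
        rho = 0.5 * beta                   # step insertion of +0.5 dollars

        def pkes(t, y):
            n, c = y[0], y[1:]
            dn = (rho - beta) / LAMBDA * n + np.dot(lam_i, c)
            dc = beta_i / LAMBDA * n - lam_i * c
            return np.concatenate(([dn], dc))

        y0 = np.concatenate(([1.0], beta_i / (lam_i * LAMBDA)))  # equilibrium
        sol = solve_ivp(pkes, (0.0, 10.0), y0, method="LSODA", rtol=1e-8)
        print(f"relative power at t = 10 s: {sol.y[0, -1]:.3f}")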

  15. Computational Modeling and Validation for Hypersonic Inlets

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1996-01-01

    Hypersonic inlet research activity at NASA is reviewed. The basis for the paper is the experimental tests performed with three inlets: the NASA Lewis Research Center Mach 5, the McDonnell Douglas Mach 12, and the NASA Langley Mach 18. Both three-dimensional PNS and NS codes have been used to compute the flow within the three inlets. Modeling assumptions in the codes involve the turbulence model, the nature of the boundary layer, shock wave-boundary layer interaction, and the flow spilled to the outside of the inlet. Use of the codes and the experimental data are helping to develop a clearer understanding of the inlet flow physics and to focus on the modeling improvements required in order to arrive at validated codes.

  16. Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).

    PubMed

    Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K

    2013-02-01

    We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS was examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents and this complex information can be obtained relatively quickly.
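
    Interrater reliability of a coding scheme like the DOCS is conventionally summarized with Cohen's kappa. A short sketch of the statistic for two coders assigning a binary code (ratings are illustrative):

        def cohens_kappa(rater_a, rater_b):
            """Chance-corrected agreement between two raters."""
            n = len(rater_a)
            po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            labels = set(rater_a) | set(rater_b)
            pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
            return (po - pe) / (1 - pe)

        a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
        b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 0]
        print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.58 for these ratings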

  17. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Marshall, William BJ

    In the course of criticality code validation, outlier cases are frequently encountered. Historically, the causes of these unexpected results could be diagnosed only through comparison with other similar cases or through the known presence of a unique component of the critical experiment. The sensitivity and uncertainty (S/U) analysis tools available in the SCALE 6.1 code system provide a much broader range of options to examine underlying causes of outlier cases. This paper presents some case studies performed as a part of the recent validation of the KENO codes in SCALE 6.1 using S/U tools to examine potential causes of biases.
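    A hedged sketch of the kind of screening that precedes such S/U follow-up: standardizing a suite's calculated-to-expected keff ratios and flagging cases beyond two standard deviations. The C/E values below are invented for illustration and are not from the SCALE 6.1 validation.

        import numpy as np

        keff_ce = np.array([0.9991, 1.0004, 0.9987, 0.9995, 0.9978,
                            1.0002, 0.9990, 0.9984, 1.0008, 1.0152])  # C/E per benchmark
        z = (keff_ce - keff_ce.mean()) / keff_ce.std(ddof=1)
        outliers = np.flatnonzero(np.abs(z) > 2)
        print("outlier case indices:", outliers)  # candidates for S/U diagnosis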

  19. The How Project: understanding contextual challenges to global surgical care provision in low-resource settings

    PubMed Central

    Raykar, Nakul P; Yorlets, Rachel R; Liu, Charles; Goldman, Roberta; Greenberg, Sarah L M; Kotagal, Meera; Farmer, Paul E; Meara, John G; Roy, Nobhojit; Gillies, Rowan D

    2016-01-01

    Introduction: Five billion people around the world do not have access to safe, affordable, timely surgical care. This series of qualitative interviews was launched by The Lancet Commission on Global Surgery (LCoGS) with the aim of understanding the contextual challenges—the specific circumstances—faced by surgical care providers in low-resource settings who care for impoverished patients, and how those providers overcome these challenges. Methods: From January 2014 to February 2015, 20 LCoGS collaborators conducted semistructured interviews with 148 surgical providers in low-resource settings in 21 countries. Stratified purposive sampling was used to include both rural and urban providers, and reputational case selection identified individuals. Interviewers were trained with an implementation manual. Following immersion into de-identified texts from completed interviews, topical coding and further analysis of coded texts was completed by an independent analyst with periodic validation from a second analyst. Results: Providers described substantial financial, geographic and cultural barriers to patient access. Rural surgical teams reported a lack of a trained workforce and insufficient infrastructure, equipment, supplies and banked blood. Urban providers face overcrowding, exacerbated by minimal clinical and administrative support, and limited interhospital care coordination. Many providers across contexts identified national health policies that do not reflect the realities of resource-poor settings. Some findings were region-specific, such as weak patient–provider relationships and unreliable supply chains. In all settings, surgical teams have created workarounds to deliver care despite the challenges. Discussion: While some differences exist between countries, the barriers to safe surgery and anaesthesia are overall consistent and resource-dependent. Efforts to advance and expand global surgery must address these commonalities, while local policymakers can tailor responses to key contextual differences. PMID:28588976

  20. The How Project: understanding contextual challenges to global surgical care provision in low-resource settings.

    PubMed

    Raykar, Nakul P; Yorlets, Rachel R; Liu, Charles; Goldman, Roberta; Greenberg, Sarah L M; Kotagal, Meera; Farmer, Paul E; Meara, John G; Roy, Nobhojit; Gillies, Rowan D

    2016-01-01

    Five billion people around the world do not have access to safe, affordable, timely surgical care. This series of qualitative interviews was launched by The Lancet Commission on Global Surgery (LCoGS) with the aim of understanding the contextual challenges-the specific circumstances-faced by surgical care providers in low-resource settings who care for impoverished patients, and how those providers overcome these challenges. From January 2014 to February 2015, 20 LCoGS collaborators conducted semistructured interviews with 148 surgical providers in low-resource settings in 21 countries. Stratified purposive sampling was used to include both rural and urban providers, and reputational case selection identified individuals. Interviewers were trained with an implementation manual. Following immersion into de-identified texts from completed interviews, topical coding and further analysis of coded texts was completed by an independent analyst with periodic validation from a second analyst. Providers described substantial financial, geographic and cultural barriers to patient access. Rural surgical teams reported a lack of a trained workforce and insufficient infrastructure, equipment, supplies and banked blood. Urban providers face overcrowding, exacerbated by minimal clinical and administrative support, and limited interhospital care coordination. Many providers across contexts identified national health policies that do not reflect the realities of resource-poor settings. Some findings were region-specific, such as weak patient-provider relationships and unreliable supply chains. In all settings, surgical teams have created workarounds to deliver care despite the challenges. While some differences exist between countries, the barriers to safe surgery and anaesthesia are overall consistent and resource-dependent. Efforts to advance and expand global surgery must address these commonalities, while local policymakers can tailor responses to key contextual differences.

  1. Validity of administrative coding in identifying patients with upper urinary tract calculi.

    PubMed

    Semins, Michelle J; Trock, Bruce J; Matlaga, Brian R

    2010-07-01

    Administrative databases are increasingly used for epidemiological investigations. We performed a study to assess the validity of ICD-9 codes for upper urinary tract stone disease in an administrative database. We retrieved the records of all inpatients and outpatients at Johns Hopkins Hospital between November 2007 and October 2008 with an ICD-9 code of 592, 592.0, 592.1 or 592.9 as one of the first 3 diagnosis codes. A random number generator selected 100 encounters for further review. We considered a patient to have a true diagnosis of an upper tract stone if the medical records specifically referenced a kidney stone event, or included current or past treatment for a kidney stone. Descriptive and comparative analyses were performed. A total of 8,245 encounters coded as upper tract calculus were identified and 100 were randomly selected for review. Two patients could not be identified within the electronic medical record and were excluded from the study. The positive predictive value of using all ICD-9 codes for an upper tract calculus (592, 592.0, 592.1) to identify subjects with renal or ureteral stones was 95.9%. For 592.0 alone, the positive predictive value was 85%. However, although the positive predictive value for 592.1 alone was 100%, 26 subjects (76%) with a ureteral stone were not appropriately billed with this code. ICD-9 coding for urinary calculi is likely to be sufficiently valid to be useful in studies using administrative data to analyze stone disease. However, ICD-9 coding is not a reliable means to distinguish between subjects with renal and ureteral calculi. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  2. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (E_b/N_0, where E_b is the energy per bit and N_0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced from which non-linear codes for two dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
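    For context, the Shannon bound referenced above fixes the minimum E_b/N_0 at a given bandwidth efficiency eta (bit/s/Hz) as (2^eta - 1)/eta. The short sketch below is a worked illustration of that floor, not anything from the report itself:

        import math

        # Minimum Eb/N0 (dB) permitted by Shannon capacity at bandwidth efficiency eta
        for eta in (0.5, 1.0, 2.0, 3.0, 4.0):
            ebn0 = (2**eta - 1) / eta  # linear ratio
            print(f"eta = {eta:3.1f} bit/s/Hz -> Eb/N0 >= {10 * math.log10(ebn0):5.2f} dB")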

  3. Validation of coding algorithms for the identification of patients hospitalized for alcoholic hepatitis using administrative data.

    PubMed

    Pang, Jack X Q; Ross, Erin; Borman, Meredith A; Zimmer, Scott; Kaplan, Gilaad G; Heitman, Steven J; Swain, Mark G; Burak, Kelly W; Quan, Hude; Myers, Robert P

    2015-09-11

    Epidemiologic studies of alcoholic hepatitis (AH) have been hindered by the lack of a validated International Classification of Disease (ICD) coding algorithm for use with administrative data. Our objective was to validate coding algorithms for AH using a hospitalization database. The Hospital Discharge Abstract Database (DAD) was used to identify consecutive adults (≥18 years) hospitalized in the Calgary region with a diagnosis code for AH (ICD-10, K70.1) between 01/2008 and 08/2012. Medical records were reviewed to confirm the diagnosis of AH, defined as a history of heavy alcohol consumption, elevated AST and/or ALT (<300 U/L), serum bilirubin >34 μmol/L, and elevated INR. Subgroup analyses were performed according to the diagnosis field in which the code was recorded (primary vs. secondary) and AH severity. Algorithms that incorporated ICD-10 codes for cirrhosis and its complications were also examined. Of 228 potential AH cases, 122 patients had confirmed AH, corresponding to a positive predictive value (PPV) of 54% (95% CI 47-60%). PPV improved when AH was the primary versus a secondary diagnosis (67% vs. 21%; P < 0.001). Algorithms that included diagnosis codes for ascites (PPV 75%; 95% CI 63-86%), cirrhosis (PPV 60%; 47-73%), and gastrointestinal hemorrhage (PPV 62%; 51-73%) had improved performance; however, the prevalence of these diagnoses in confirmed AH cases was low (29-39%). In conclusion, the low PPV of the diagnosis code for AH suggests that caution is necessary if this hospitalization database is used in large-scale epidemiologic studies of this condition.
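    A minimal sketch of the headline estimate above: a positive predictive value with a Wilson score 95% interval, computed from the reported counts (122 confirmed cases among 228 coded). The helper below is a standard textbook formula, not code from the study.

        import math

        def wilson_ci(successes, n, z=1.96):
            """Wilson score interval for a binomial proportion."""
            p = successes / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return centre - half, centre + half

        lo, hi = wilson_ci(122, 228)
        print(f"PPV = {122 / 228:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~0.54 (0.47-0.60)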

  4. FY2017 Pilot Project Plan for the Nuclear Energy Knowledge and Validation Center Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Weiju

    To prepare for technical development of computational code validation under the Nuclear Energy Knowledge and Validation Center (NEKVAC) initiative, several meetings were held by a group of experts of the Idaho National Laboratory (INL) and the Oak Ridge National Laboratory (ORNL) to develop requirements of, and formulate a structure for, a transient fuel database through leveraging existing resources. It was concluded in discussions of these meetings that a pilot project is needed to address the most fundamental issues that can generate immediate stimulus to near-future validation developments as well as long-lasting benefits to NEKVAC operation. The present project is proposed based on the consensus of these discussions. Analysis of common scenarios in code validation indicates that the incapability of acquiring satisfactory validation data is often a showstopper that must first be tackled before any confident validation developments can be carried out. Validation data are usually found scattered in different places, most likely with interrelationships among the data not well documented, incomplete with information for some parameters missing, nonexistent, or unrealistic to experimentally generate. Furthermore, with very different technical backgrounds, the modeler, the experimentalist, and the knowledgebase developer that must be involved in validation data development often cannot communicate effectively without a data package template that is representative of the data structure for the information domain of interest to the desired code validation. This pilot project is proposed to use the legendary TREAT Experiments Database to provide core elements for creating an ideal validation data package. Data gaps and missing data interrelationships will be identified from these core elements. All the identified missing elements will then be filled in with experimental data if available from other existing sources or with dummy data if nonexistent. The resulting hybrid validation data package (composed of experimental and dummy data) will provide a clear and complete instance delineating the structure of the desired validation data and enabling effective communication among the modeler, the experimentalist, and the knowledgebase developer. With a good common understanding of the desired data structure by the three parties of subject matter experts, further existing data hunting will be effectively conducted, new experimental data generation will be realistically pursued, knowledgebase schema will be practically designed, and code validation will be confidently planned.

  5. Newt-omics: a comprehensive repository for omics data from the newt Notophthalmus viridescens

    PubMed Central

    Bruckskotten, Marc; Looso, Mario; Reinhardt, Richard; Braun, Thomas; Borchardt, Thilo

    2012-01-01

    Notophthalmus viridescens, a member of the salamander family is an excellent model organism to study regenerative processes due to its unique ability to replace lost appendages and to repair internal organs. Molecular insights into regenerative events have been severely hampered by the lack of genomic, transcriptomic and proteomic data, as well as an appropriate database to store such novel information. Here, we describe ‘Newt-omics’ (http://newt-omics.mpi-bn.mpg.de), a database, which enables researchers to locate, retrieve and store data sets dedicated to the molecular characterization of newts. Newt-omics is a transcript-centred database, based on an Expressed Sequence Tag (EST) data set from the newt, covering ∼50 000 Sanger sequenced transcripts and a set of high-density microarray data, generated from regenerating hearts. Newt-omics also contains a large set of peptides identified by mass spectrometry, which was used to validate 13 810 ESTs as true protein coding. Newt-omics is open to implement additional high-throughput data sets without changing the database structure. Via a user-friendly interface Newt-omics allows access to a huge set of molecular data without the need for prior bioinformatical expertise. PMID:22039101

  6. Reliability of a rating procedure to monitor industry self-regulation codes governing alcohol advertising content.

    PubMed

    Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne

    2008-03-01

    The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
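    The test-retest statistics above (r, rho, and kappa) are all computable from paired ratings. A minimal sketch for the kappa piece, using scikit-learn and made-up yes/no violation ratings rather than the study's data:

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical ratings of the same advertisements by the same rater, one week apart
        week1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
        week2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
        print(f"test-retest kappa = {cohen_kappa_score(week1, week2):.2f}")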

  7. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry

    DTIC Science & Technology

    2014-05-29

    …its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within the context… low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the…

  8. The Physician Recommendation Coding System (PhyReCS): A Reliable and Valid Method to Quantify the Strength of Physician Recommendations During Clinical Encounters

    PubMed Central

    Scherr, Karen A.; Fagerlin, Angela; Williamson, Lillie D.; Davis, J. Kelly; Fridman, Ilona; Atyeo, Natalie; Ubel, Peter A.

    2016-01-01

    Background: Physicians’ recommendations affect patients’ treatment choices. However, most research relies on physicians’ or patients’ retrospective reports of recommendations, which offer a limited perspective and have limitations such as recall bias. Objective: To develop a reliable and valid method to measure the strength of physician recommendations using direct observation of clinical encounters. Methods: Clinical encounters (n = 257) were recorded as part of a larger study of prostate cancer decision making. We used an iterative process to create the 5-point Physician Recommendation Coding System (PhyReCS). To determine reliability, research assistants double-coded 50 transcripts. To establish content validity, we used one-way ANOVAs to determine whether relative treatment recommendation scores differed as a function of which treatment patients received. To establish concurrent validity, we examined whether patients’ perceived treatment recommendations matched our coded recommendations. Results: The PhyReCS was highly reliable (Krippendorff’s alpha = .89, 95% CI [.86, .91]). The average relative treatment recommendation score for each treatment was higher for individuals who received that particular treatment. For example, the average relative surgery recommendation score was higher for individuals who received surgery versus radiation (mean difference = .98, SE = .18, p < .001) or active surveillance (mean difference = 1.10, SE = .14, p < .001). Patients’ perceived recommendations matched coded recommendations 81% of the time. Conclusion: The PhyReCS is a reliable and valid way to capture the strength of physician recommendations. We believe that the PhyReCS would be helpful for other researchers who wish to study physician recommendations, an important part of patient decision making. PMID:27343015

  9. The Physician Recommendation Coding System (PhyReCS): A Reliable and Valid Method to Quantify the Strength of Physician Recommendations During Clinical Encounters.

    PubMed

    Scherr, Karen A; Fagerlin, Angela; Williamson, Lillie D; Davis, J Kelly; Fridman, Ilona; Atyeo, Natalie; Ubel, Peter A

    2017-01-01

    Physicians' recommendations affect patients' treatment choices. However, most research relies on physicians' or patients' retrospective reports of recommendations, which offer a limited perspective and have limitations such as recall bias. To develop a reliable and valid method to measure the strength of physician recommendations using direct observation of clinical encounters. Clinical encounters (n = 257) were recorded as part of a larger study of prostate cancer decision making. We used an iterative process to create the 5-point Physician Recommendation Coding System (PhyReCS). To determine reliability, research assistants double-coded 50 transcripts. To establish content validity, we used 1-way analyses of variance to determine whether relative treatment recommendation scores differed as a function of which treatment patients received. To establish concurrent validity, we examined whether patients' perceived treatment recommendations matched our coded recommendations. The PhyReCS was highly reliable (Krippendorff's alpha = 0.89, 95% CI [0.86, 0.91]). The average relative treatment recommendation score for each treatment was higher for individuals who received that particular treatment. For example, the average relative surgery recommendation score was higher for individuals who received surgery versus radiation (mean difference = 0.98, SE = 0.18, P < 0.001) or active surveillance (mean difference = 1.10, SE = 0.14, P < 0.001). Patients' perceived recommendations matched coded recommendations 81% of the time. The PhyReCS is a reliable and valid way to capture the strength of physician recommendations. We believe that the PhyReCS would be helpful for other researchers who wish to study physician recommendations, an important part of patient decision making. © The Author(s) 2016.
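    The content-validity check described in both versions of this record is a one-way ANOVA of coded recommendation scores grouped by the treatment actually received. A sketch with invented scores, using SciPy's f_oneway:

        from scipy.stats import f_oneway

        # Hypothetical relative surgery-recommendation scores, grouped by treatment received
        surgery      = [1.8, 2.1, 1.5, 2.4, 1.9]
        radiation    = [0.7, 1.1, 0.9, 0.4, 0.8]
        surveillance = [0.5, 0.9, 0.2, 0.7, 0.6]
        f_stat, p = f_oneway(surgery, radiation, surveillance)
        print(f"F = {f_stat:.2f}, p = {p:.4f}")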

  10. Improving Public Reporting and Data Validation for Complex Surgical Site Infections After Coronary Artery Bypass Graft Surgery and Hip Arthroplasty

    PubMed Central

    Calderwood, Michael S.; Kleinman, Ken; Murphy, Michael V.; Platt, Richard; Huang, Susan S.

    2014-01-01

    Background: Deep and organ/space surgical site infections (D/OS SSI) cause significant morbidity, mortality, and costs. Rates are publicly reported and increasingly used as quality metrics affecting hospital payment. Lack of standardized surveillance methods threatens the accuracy of reported data and decreases confidence in comparisons based upon these data. Methods: We analyzed data from national validation studies that used Medicare claims to trigger chart review for SSI confirmation after coronary artery bypass graft surgery (CABG) and hip arthroplasty. We evaluated code performance (sensitivity and positive predictive value) to select diagnosis codes that best identified D/OS SSI. Codes were analyzed individually and in combination. Results: Analysis included 143 patients with D/OS SSI after CABG and 175 patients with D/OS SSI after hip arthroplasty. For CABG, 9 International Classification of Diseases, 9th Revision (ICD-9) diagnosis codes identified 92% of D/OS SSI, with 1 D/OS SSI identified for every 4 cases with a diagnosis code. For hip arthroplasty, 6 ICD-9 diagnosis codes identified 99% of D/OS SSI, with 1 D/OS SSI identified for every 2 cases with a diagnosis code. Conclusions: This standardized and efficient approach for identifying D/OS SSI can be used by hospitals to improve case detection and public reporting. This method can also be used to identify potential D/OS SSI cases for review during hospital audits for data validation. PMID:25734174

  11. Natural language processing of clinical notes for identification of critical limb ischemia.

    PubMed

    Afzal, Naveed; Mallipeddi, Vishnu Priya; Sohn, Sunghwan; Liu, Hongfang; Chaudhry, Rajeev; Scott, Christopher G; Kullo, Iftikhar J; Arruda-Olson, Adelaide M

    2018-03-01

    Critical limb ischemia (CLI) is a complication of advanced peripheral artery disease (PAD) with diagnosis based on the presence of clinical signs and symptoms. However, automated identification of cases from electronic health records (EHRs) is challenging due to absence of a single definitive International Classification of Diseases (ICD-9 or ICD-10) code for CLI. In this study, we extend a previously validated natural language processing (NLP) algorithm for PAD identification to develop and validate a subphenotyping NLP algorithm (CLI-NLP) for identification of CLI cases from clinical notes. We compared performance of the CLI-NLP algorithm with CLI-related ICD-9 billing codes. The gold standard for validation was human abstraction of clinical notes from EHRs. Compared to billing codes the CLI-NLP algorithm had higher positive predictive value (PPV) (CLI-NLP 96%, billing codes 67%, p < 0.001), specificity (CLI-NLP 98%, billing codes 74%, p < 0.001) and F1-score (CLI-NLP 90%, billing codes 76%, p < 0.001). The sensitivity of these two methods was similar (CLI-NLP 84%; billing codes 88%; p < 0.12). The CLI-NLP algorithm for identification of CLI from narrative clinical notes in an EHR had excellent PPV and has potential for translation to patient care as it will enable automated identification of CLI cases for quality projects, clinical decision support tools and support a learning healthcare system. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
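    The comparison above reduces to 2x2 confusion-matrix algebra against the chart-review gold standard. A minimal sketch follows; the raw cell counts are invented to roughly reproduce the reported CLI-NLP rates, since the paper reports rates rather than cells.

        def diagnostics(tp, fp, fn, tn):
            sens = tp / (tp + fn)               # sensitivity (recall)
            spec = tn / (tn + fp)               # specificity
            ppv = tp / (tp + fp)                # positive predictive value
            f1 = 2 * ppv * sens / (ppv + sens)  # harmonic mean of PPV and recall
            return sens, spec, ppv, f1

        sens, spec, ppv, f1 = diagnostics(tp=84, fp=4, fn=16, tn=196)  # illustrative counts
        print(f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} F1={f1:.2f}")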

  12. A validation of well-being and happiness surveys for administration via the Internet.

    PubMed

    Howell, Ryan T; Rodzon, Katrina S; Kurai, Mark; Sanchez, Amy H

    2010-08-01

    Internet research is appealing because it is a cost- and time-efficient way to access a large number of participants; however, the validity of Internet research for important subjective well-being (SWB) surveys has not been adequately assessed. The goal of the present study was to validate the Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985), the Positive and Negative Affect Schedule (PANAS-X; Watson & Clark, 1994), and the Subjective Happiness Scale (SHS; Lyubomirsky & Lepper, 1999) for use on the Internet. This study compared the quality of data collected using paper-based (paper-and-pencil version in a lab setting), computer-based (Web-based version in a lab setting), and Internet (Web-based version on a computer of the participant's choosing) surveys for these three measures of SWB. The paper-based and computer-based experiment recruited two college student samples; the Internet experiments recruited a college student sample and an adult sample responding to ads on different social-networking Web sites. This study provides support for the reliability, validity, and generalizability of the Internet format of the SWLS, PANAS-X, and SHS. Across the three experiments, the results indicate that the computer-based and Internet surveys had means, standard deviations, reliabilities, and factor structures that were similar to those of the paper-based versions. The discussion examines the difficulty of higher attrition for the Internet version, the need to examine reverse-coded items in the future, and the possibility that unhappy individuals are more likely to participate in Internet surveys of SWB.

  13. VLF Trimpi modelling on the path NWC-Dunedin using both finite element and 3D Born modelling

    NASA Astrophysics Data System (ADS)

    Nunn, D.; Hayakawa, K. B. M.

    1998-10-01

    This paper investigates the numerical modelling of VLF Trimpis, produced by a D region inhomogeneity on the great circle path. Two different codes are used to model Trimpis on the path NWC-Dunedin. The first is a 2D Finite Element Method Code (FEM), whose solutions are rigorous and valid in the strong scattering or non-Born limit. The second code is a 3D model that invokes the Born approximation. The predicted Trimpis from these codes compare very closely, thus confirming the validity of both models. The modal scattering matrices for both codes are analysed in some detail and are found to have a comparable structure. They indicate strong scattering between the dominant TM modes. Analysis of the scattering matrix from the FEM code shows that departure from linear Born behaviour occurs when the inhomogeneity has a horizontal scale size of about 100 km and a maximum electron density enhancement at 75 km altitude of about 6 electrons.

  14. A measure of short-term visual memory based on the WISC-R coding subtest.

    PubMed

    Collaer, M L; Evans, J R

    1982-07-01

    Adapted the Coding subtest of the WISC-R to provide a measure of visual memory. Three hundred and five children, aged 8 through 12, were administered the Coding test using standard directions. A few seconds after completion the key was taken away, and each was given a paper with only the digits and asked to write the appropriate matching symbol below each. This was termed "Coding Recall." To provide validity data, a subgroup of 50 Ss also was administered the Attention Span for Letters subtest from the Detroit Tests of Learning Aptitude (as a test of visual memory for sequences of letters) and a Bender Gestalt recall test (as a measure of visual memory for geometric forms). Coding Recall means and standard deviations are reported separately by sex and age level. Implications for clinicians are discussed. Reservations about clinical use of the data are given in view of the possible lack of representativeness of the sample used and the limited reliability and validity of Coding Recall.

  15. Exploring Space Management Goals in Institutional Care Facilities in China

    PubMed Central

    Zhang, Jiankun

    2017-01-01

    Space management has been widely examined in commercial facilities, educational facilities, and hospitals but not in China's institutional care facilities. Poor spatial arrangements, such as wasted space, dysfunctionality, and environment mismanagement, are increasing; in turn, the occupancy rate is decreasing due to residential dissatisfaction. To address these problems, this paper's objective is to explore the space management goals (SMGs) in institutional care facilities in China. Systematic literature analysis was adopted to set SMGs' principles, to identify nine theoretical SMGs, and to develop the conceptual model of SMGs for institutional care facilities. A total of 19 intensive interviews were conducted with stakeholders in seven institutional care facilities to collect data for qualitative analysis. The qualitative evidence was analyzed through open coding, axial coding, and selective coding. As a result, six major categories as well as their interrelationships were put forward to visualize the path diagram for exploring SMGs in China's institutional care facilities. Furthermore, seven expected SMGs that were explored from qualitative evidence were confirmed as China's SMGs in institutional care facilities by a validation test. Finally, a gap analysis among theoretical SMGs and China's SMGs provided recommendations for implementing space management in China's institutional care facilities. PMID:29065629

  16. DLRS: gene tree evolution in light of a species tree.

    PubMed

    Sjöstrand, Joel; Sennblad, Bengt; Arvestad, Lars; Lagergren, Jens

    2012-11-15

    PrIME-DLRS (or colloquially: 'Delirious') is a phylogenetic software tool to simultaneously infer and reconcile a gene tree given a species tree. It accounts for duplication and loss events, a relaxed molecular clock and is intended for the study of homologous gene families, for example in a comparative genomics setting involving multiple species. PrIME-DLRS uses a Bayesian MCMC framework, where the input is a known species tree with divergence times and a multiple sequence alignment, and the output is a posterior distribution over gene trees and model parameters. PrIME-DLRS is available for Java SE 6+ under the New BSD License, and JAR files and source code can be downloaded from http://code.google.com/p/jprime/. There is also a slightly older C++ version available as a binary package for Ubuntu, with download instructions at http://prime.sbc.su.se. The C++ source code is available upon request. joel.sjostrand@scilifelab.se or jens.lagergren@scilifelab.se. PrIME-DLRS is based on a sound probabilistic model (Åkerborg et al., 2009) and has been thoroughly validated on synthetic and biological datasets (Supplementary Material online).

  17. Exploring Space Management Goals in Institutional Care Facilities in China.

    PubMed

    Li, Lingzhi; Yuan, Jingfeng; Ning, Yan; Shao, Qiuhu; Zhang, Jiankun

    2017-01-01

    Space management has been widely examined in commercial facilities, educational facilities, and hospitals but not in China's institutional care facilities. Poor spatial arrangements, such as wasted space, dysfunctionality, and environment mismanagement, are increasing; in turn, the occupancy rate is decreasing due to residential dissatisfaction. To address these problems, this paper's objective is to explore the space management goals (SMGs) in institutional care facilities in China. Systematic literature analysis was adopted to set SMGs' principles, to identify nine theoretical SMGs, and to develop the conceptual model of SMGs for institutional care facilities. A total of 19 intensive interviews were conducted with stakeholders in seven institutional care facilities to collect data for qualitative analysis. The qualitative evidence was analyzed through open coding, axial coding, and selective coding. As a result, six major categories as well as their interrelationships were put forward to visualize the path diagram for exploring SMGs in China's institutional care facilities. Furthermore, seven expected SMGs that were explored from qualitative evidence were confirmed as China's SMGs in institutional care facilities by a validation test. Finally, a gap analysis among theoretical SMGs and China's SMGs provided recommendations for implementing space management in China's institutional care facilities.

  18. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.

    PubMed

    Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting

    2018-02-12

    Recently released large-scale neuron morphological data has greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which can help users explore neuron morphologies in an interactive and immersive manner.
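    A hedged sketch of the retrieval step: the paper's learned codes are not reproduced here, so sign random projection stands in as the binary-coding function, with Hamming distance doing the search. Array sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        features = rng.normal(size=(10_000, 256))         # fused deep + hand-crafted features
        planes = rng.normal(size=(256, 64))               # 64 random hyperplanes -> 64-bit codes

        codes = (features @ planes > 0).astype(np.uint8)  # one row of bits per neuron
        query = codes[0]
        hamming = np.count_nonzero(codes != query, axis=1)
        print("closest matches:", np.argsort(hamming)[:5])  # index 0 is the query itself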

  19. The Nuclear Energy Knowledge and Validation Center Summary of Activities Conducted in FY16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans David

    The Nuclear Energy Knowledge and Validation Center (NEKVaC) is a new initiative by the Department of Energy (DOE) and Idaho National Laboratory (INL) to coordinate and focus the resources and expertise that exist within the DOE toward solving issues in modern nuclear code validation and knowledge management. In time, code owners, users, and developers will view the NEKVaC as a partner and essential resource for acquiring the best practices and latest techniques for validating codes, providing guidance in planning and executing experiments, facilitating access to and maximizing the usefulness of existing data, and preserving knowledge for continual use by nuclear professionals and organizations for their own validation needs. The scope of the NEKVaC covers many interrelated activities that will need to be cultivated carefully in the near term and managed properly once the NEKVaC is fully functional. Three areas comprise the principal mission: (1) identify and prioritize projects that extend the field of validation science and its application to modern codes, (2) develop and disseminate best practices and guidelines for high-fidelity multiphysics/multiscale analysis code development and associated experiment design, and (3) define protocols for data acquisition and knowledge preservation and provide a portal for access to databases currently scattered among numerous organizations. These mission areas, while each having a unique focus, are interdependent and complementary. Likewise, all activities supported by the NEKVaC, both near term and long term, must possess elements supporting all three areas. This cross-cutting nature is essential to ensuring that activities and supporting personnel do not become “stove piped” (i.e., focused on a specific function such that the activity itself becomes the objective rather than achieving the larger vision). This report begins with a description of the mission areas; specifically, the role played by each major committee and the types of activities for which they are responsible. It then lists and describes the proposed near term tasks upon which future efforts can build.

  20. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE PAGES

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.

  1. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
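    One of the macroscopic comparisons drawn in the study above is permeability. A minimal sketch of that reduction, assuming each code reports a mean Darcy flux under the same imposed pressure gradient; the velocities below are placeholders, not results from the intercomparison.

        MU = 1.0e-3      # water viscosity [Pa s]
        GRAD_P = 100.0   # imposed pressure-gradient magnitude [Pa/m]

        mean_flux = {"FVM-CFD": 1.02e-5, "LBM": 1.00e-5,
                     "SPH": 0.97e-5, "PNM": 1.08e-5}  # Darcy flux q [m/s], illustrative
        for code, q in mean_flux.items():
            k = MU * q / GRAD_P                       # Darcy's law: q = (k/mu) |grad p|
            print(f"{code:8s} k = {k:.2e} m^2")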

  2. Development and preliminary evaluation of a practice-based learning and improvement tool for assessing resident competence and guiding curriculum development.

    PubMed

    Lawrence, Renée H; Tomolo, Anne M

    2011-03-01

    Although practice-based learning and improvement (PBLI) is now recognized as a fundamental and necessary skill set, we are still in need of tools that yield specific information about gaps in knowledge and application to help nurture the development of quality improvement (QI) skills in physicians in a proficient and proactive manner. We developed a questionnaire and coding system as an assessment tool to evaluate and provide feedback regarding PBLI self-efficacy, knowledge, and application skills for residency programs and related professional requirements. Five nationally recognized QI experts/leaders reviewed and completed our questionnaire. Through an iterative process, a coding system based on identifying key variables needed for ideal responses was developed to score project proposals. The coding system comprised 14 variables related to the QI projects, and an additional 30 variables related to the core knowledge concepts related to PBLI. A total of 86 residents completed the questionnaire, and 2 raters coded their open-ended responses. Interrater reliability was assessed by percentage agreement and Cohen κ for individual variables and Lin concordance correlation for total scores for knowledge and application. Discriminative validity (t test to compare known groups) and coefficient of reproducibility as an indicator of construct validity (item difficulty hierarchy) were also assessed. Interrater reliability estimates were good (percentage of agreements, above 90%; κ, above 0.4 for most variables; concordances for total scores were R = .88 for knowledge and R = .98 for application). Despite the residents' limited range of experiences in the group with prior PBLI exposure, our tool met our goal of differentiating between the 2 groups in our preliminary analyses. Correcting for chance agreement identified some variables that are potentially problematic. Although additional evaluation is needed, our tool may prove helpful and provide detailed information about trainees' progress and the curriculum.

  3. Development and Preliminary Evaluation of a Practice-Based Learning and Improvement Tool for Assessing Resident Competence and Guiding Curriculum Development

    PubMed Central

    Lawrence, Renée H; Tomolo, Anne M

    2011-01-01

    Background: Although practice-based learning and improvement (PBLI) is now recognized as a fundamental and necessary skill set, we are still in need of tools that yield specific information about gaps in knowledge and application to help nurture the development of quality improvement (QI) skills in physicians in a proficient and proactive manner. We developed a questionnaire and coding system as an assessment tool to evaluate and provide feedback regarding PBLI self-efficacy, knowledge, and application skills for residency programs and related professional requirements. Methods: Five nationally recognized QI experts/leaders reviewed and completed our questionnaire. Through an iterative process, a coding system based on identifying key variables needed for ideal responses was developed to score project proposals. The coding system comprised 14 variables related to the QI projects, and an additional 30 variables related to the core knowledge concepts related to PBLI. A total of 86 residents completed the questionnaire, and 2 raters coded their open-ended responses. Interrater reliability was assessed by percentage agreement and Cohen κ for individual variables and Lin concordance correlation for total scores for knowledge and application. Discriminative validity (t test to compare known groups) and coefficient of reproducibility as an indicator of construct validity (item difficulty hierarchy) were also assessed. Results: Interrater reliability estimates were good (percentage of agreements, above 90%; κ, above 0.4 for most variables; concordances for total scores were R = .88 for knowledge and R = .98 for application). Conclusion: Despite the residents' limited range of experiences in the group with prior PBLI exposure, our tool met our goal of differentiating between the 2 groups in our preliminary analyses. Correcting for chance agreement identified some variables that are potentially problematic. Although additional evaluation is needed, our tool may prove helpful and provide detailed information about trainees' progress and the curriculum. PMID:22379522
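    Lin's concordance correlation, used in both versions of this record for total-score agreement, has a closed form: CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2). A self-contained sketch with hypothetical rater totals:

        import numpy as np

        def lin_ccc(x, y):
            """Lin's concordance correlation coefficient for paired scores."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            mx, my = x.mean(), y.mean()
            sxy = ((x - mx) * (y - my)).mean()
            return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

        rater1 = [24, 30, 18, 27, 22, 25, 29, 16]  # hypothetical total scores
        rater2 = [23, 31, 17, 28, 20, 26, 28, 18]
        print(f"CCC = {lin_ccc(rater1, rater2):.2f}")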

  4. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

    Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking will help quantify uncertainty with MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes. Connection of the 3DHZETRN to general geometry will be discussed.

  5. Quantum Dense Coding About a Two-Qubit Heisenberg XYZ Model

    NASA Astrophysics Data System (ADS)

    Xu, Hui-Yun; Yang, Guo-Hui

    2017-09-01

    By taking the nonuniform magnetic field into account, quantum dense coding with thermal entangled states of a two-qubit anisotropic Heisenberg XYZ chain is investigated in detail. We mainly show how the dense coding capacity (χ) changes with the different parameters. It is found that the dense coding capacity χ can be enhanced by decreasing the magnetic field B, the degree of inhomogeneity b, and the temperature T, or by increasing the coupling constant along the z-axis, J_z. In addition, we find that χ remains stable as the anisotropy of the XY plane, Δ, changes under certain temperature conditions. Studying the effect of the different parameters on χ shows that we can suitably tune the values of B, b, J_z, and Δ, or adjust the temperature T, to obtain a valid dense coding capacity (χ > 1). Moreover, the temperature plays a key role in adjusting the value of the dense coding capacity χ: a valid dense coding capacity can always be obtained in the low-temperature limit.
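    A hedged numerical sketch of the quantity being scanned: for a two-qubit thermal state, the dense coding capacity can be written χ = 1 + S(ρ_B) − S(ρ_AB) (a Hiroshima-style bound), with valid dense coding when χ > 1. The Hamiltonian convention and all parameter values below are assumptions for illustration and need not match the paper's.

        import numpy as np
        from scipy.linalg import expm

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]])
        sz = np.diag([1.0, -1.0]).astype(complex)
        I2 = np.eye(2, dtype=complex)

        def entropy(rho):
            """Von Neumann entropy in bits."""
            w = np.linalg.eigvalsh(rho)
            w = w[w > 1e-12]
            return float(-(w * np.log2(w)).sum())

        Jx, Jy, Jz, B, b, T = 0.8, 0.6, 1.2, 0.3, 0.1, 0.5  # illustrative parameters
        H = (Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy) + Jz * np.kron(sz, sz)
             + (B + b) * np.kron(sz, I2) + (B - b) * np.kron(I2, sz))
        rho = expm(-H / T)
        rho /= np.trace(rho).real                           # thermal (Gibbs) state

        rho_b = np.einsum("ijil->jl", rho.reshape(2, 2, 2, 2))  # trace out qubit A
        chi = 1 + entropy(rho_b) - entropy(rho)
        print(f"chi = {chi:.3f} (valid dense coding when chi > 1)")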

  6. NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.

  7. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.

  8. Impact of Neutrino Opacities on Core-collapse Supernova Simulations

    NASA Astrophysics Data System (ADS)

    Kotake, Kei; Takiwaki, Tomoya; Fischer, Tobias; Nakamura, Ko; Martínez-Pinedo, Gabriel

    2018-02-01

    The accurate description of neutrino opacities is central to both the core-collapse supernova (CCSN) phenomenon and the validity of the explosion mechanism itself. In this work, we study in a systematic fashion the role of a variety of well-selected neutrino opacities in CCSN simulations where the multi-energy, three-flavor neutrino transport is solved using the isotropic diffusion source approximation (IDSA) scheme. To verify our code, we first present results from one-dimensional (1D) simulations following the core collapse, bounce, and ∼250 ms postbounce of a 15 M⊙ star using a standard set of neutrino opacities by Bruenn. A detailed comparison with published results supports the reliability of our three-flavor IDSA scheme using the standard opacity set. We then investigate in 1D simulations how individual opacity updates lead to differences with the baseline run with the standard opacity set. Through detailed comparisons with previous work, we check the validity of our implementation of each update in a step-by-step manner. Individual neutrino opacities with the largest impact on the overall evolution in 1D simulations are selected for systematic comparisons in our two-dimensional (2D) simulations. Special attention is given to the criterion of explodability in the 2D models. We discuss the implications of these results as well as its limitations and the requirements for future, more elaborate CCSN modeling.

  9. Validating LES for Jet Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2011-01-01

    Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and interpreting the massive datasets that result when that dream is realized. This paper primarily addresses the former: the use of advanced experimental techniques, such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem by discussing which measures, critical for aeroacoustics, should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And which measures should be used in validation? This paper argues that the issue of accuracy be addressed by cross-facility and cross-disciplinary examination of modern datasets, along with increased reporting of internal quality checks in PIV analysis. Further, it argues that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that aeroacoustic theory has shown to be critical to flow-generated sound.

  10. Manual versus automated coding of free-text self-reported medication data in the 45 and Up Study: a validation study.

    PubMed

    Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily

    2015-03-30

    Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
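    The level-wise sensitivities above come from truncating the seven-character ATC code to the 5-, 4-, and 3-character prefixes that correspond to the reported levels before comparing automated with manual assignments. A minimal sketch with illustrative code pairs:

        # (automated, manual) ATC pairs; the codes are illustrative examples
        pairs = [("C09AA02", "C09AA02"),   # exact 7-character match
                 ("A02BC01", "A02BC05"),   # agrees at the 5-character chemical subgroup
                 ("N02BE01", "N02AA01")]   # agrees only at the 3-character therapeutic subgroup

        for level in (7, 5, 4, 3):
            hits = sum(auto[:level] == manual[:level] for auto, manual in pairs)
            print(f"ATC level {level}: {hits}/{len(pairs)} agree")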

  11. Supersonic and hypersonic shock/boundary-layer interaction database

    NASA Technical Reports Server (NTRS)

    Settles, Gary S.; Dodson, Lori J.

    1994-01-01

    An assessment is given of existing shock-wave/turbulent boundary-layer interaction experiments of sufficient quality to guide turbulence modeling and code validation efforts. Although the focus of this work is hypersonic, experiments at Mach numbers as low as 3 were considered. The principal means of identifying candidate studies was a computerized search of the AIAA Aerospace Database. Several hundred candidate studies were examined, and over 100 of these were subjected to a rigorous set of acceptance criteria for inclusion in the database. Nineteen experiments were found to meet these criteria, of which only seven were in the hypersonic regime (M > 5).

  12. Experimental operation of a sodium heat pipe

    NASA Astrophysics Data System (ADS)

    Holtz, R. E.; McLennan, G. A.; Koehl, E. R.

    1985-05-01

    This report documents the operation of a 28 in. long sodium heat pipe in the Heat Pipe Test Facility (HPTF) installed at Argonne National Laboratory. Experimental data were collected to simulate conditions prototypic of both a fluidized-bed coal combustor application and a space environment application. Both sets of experimental data show good agreement with the heat pipe analytical model. The heat transfer performance of the heat pipe proved reliable over a substantial period of operation and through repeated thermal cycling. Additional testing of longer heat pipes under controlled laboratory conditions will be necessary to determine performance limitations and to complete the design code validation.

  13. Development of a Web Tool for Escherichia coli Subtyping Based on fimH Alleles.

    PubMed

    Roer, Louise; Tchesnokova, Veronika; Allesøe, Rosa; Muradova, Mariya; Chattopadhyay, Sujay; Ahrenfeldt, Johanne; Thomsen, Martin C F; Lund, Ole; Hansen, Frank; Hammerum, Anette M; Sokurenko, Evgeni; Hasman, Henrik

    2017-08-01

    The aim of this study was to construct a valid, publicly available method for in silico fimH subtyping of Escherichia coli, particularly suitable for differentiation of fine-resolution subgroups within clonal groups defined by standard multilocus sequence typing (MLST). FimTyper was constructed as a FASTA database containing all currently known fimH alleles. The software source code is publicly available at https://bitbucket.org/genomicepidemiology/fimtyper, the database is freely available at https://bitbucket.org/genomicepidemiology/fimtyper_db, and a service implementing the software is available at https://cge.cbs.dtu.dk/services/FimTyper. FimTyper was validated on three data sets: one containing Sanger sequences of fimH alleles of 42 E. coli isolates generated prior to the current study (data set 1), one containing whole-genome sequence (WGS) data of 243 third-generation-cephalosporin-resistant E. coli isolates (data set 2), and one containing a randomly chosen subset of 40 E. coli isolates from data set 2 that were subjected to conventional fimH subtyping (data set 3). The combination of the three data sets enabled an evaluation and comparison of FimTyper on both Sanger sequences and WGS data. FimTyper correctly predicted all 42 fimH subtypes from the Sanger sequences in data set 1 and successfully analyzed all 243 draft genomes in data set 2. FimTyper subtyping of the Sanger sequences and WGS data from data set 3 was in complete agreement. Additionally, fimH subtyping was evaluated on a phylogenetic network of 122 sequence type 131 (ST131) E. coli isolates. There was perfect concordance between the network topology and fimH-based subclones within ST131, with accurate identification of the pandemic multidrug-resistant clonal subgroup ST131-H30. FimTyper provides a standardized tool, as a rapid alternative to conventional fimH subtyping, highly suitable for surveillance and outbreak detection. Copyright © 2017 American Society for Microbiology.
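    At its core, this kind of typing reduces to looking up an assembled sequence in an allele database. A much-simplified, hypothetical sketch of that lookup (FimTyper itself aligns assemblies with BLASTn and handles partial and imperfect matches; the file names below are placeholders):

```python
# Minimal sketch of in silico fimH allele assignment by exact sequence
# lookup against a FASTA allele database. This simplified version only
# reports exact full-length hits; file names are placeholders.

def read_fasta(path):
    """Parse a FASTA file into {record_name: sequence}."""
    records, header, chunks = {}, None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    records[header] = "".join(chunks)
                header, chunks = line[1:].split()[0], []
            elif line:
                chunks.append(line.upper())
    if header is not None:
        records[header] = "".join(chunks)
    return records

alleles = read_fasta("fimtyper_db/fimH.fsa")    # e.g. {"fimH30": "ATG..."}
genome = "".join(read_fasta("isolate_assembly.fasta").values())

hits = [name for name, seq in alleles.items() if seq in genome]
print("fimH subtype:", hits[0] if hits else "no exact allele match")
```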

  14. EAC: A program for the error analysis of STAGS results for plates

    NASA Technical Reports Server (NTRS)

    Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.

    1989-01-01

    A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example application of the code is presented, and instructions for its usage on the Cyber and VAX machines are provided.

  15. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution of the heavy ion transport equation in terms of Green's functions is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending the Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
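    The essence of the approach is that once the Green's function is known, the field anywhere is a superposition of the source folded with that kernel. A toy 1-D sketch of the superposition idea (monoenergetic, attenuation-only; this is not the paper's heavy-ion formalism, and the coefficient is arbitrary):

```python
# Illustrative sketch of the Green's-function idea in 1-D shielding:
# the uncollided flux is the source folded with an attenuation kernel,
# phi(x) = sum_j G(x - x_j) S(x_j) dx.
import numpy as np

mu = 0.5                                 # attenuation coefficient (1/cm), assumed
x = np.linspace(0.0, 10.0, 201)
dx = x[1] - x[0]
source = np.where(x < 1.0, 1.0, 0.0)     # slab source on [0, 1) cm

def G(x_field, x_src):
    """Forward-only exponential attenuation kernel."""
    d = x_field - x_src
    return np.where(d >= 0.0, np.exp(-mu * d), 0.0)

phi = np.array([np.sum(G(xf, x) * source) * dx for xf in x])
print("flux at 5 cm:", phi[x.searchsorted(5.0)])
```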

  16. Monte Carlo reference data sets for imaging research: Executive summary of the report of AAPM Research Committee Task Group 195.

    PubMed

    Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C

    2015-10-01

    The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to the method's flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but such validation typically involves comparing simulation results either to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated by the nuances of experimental conditions and uncertainty, while the latter is challenging due to the typically graphical presentation and the lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations involves a steep learning curve: it is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides investigators the information necessary to benchmark their Monte Carlo simulation software against the reference cases included here before performing their own novel research, and investigators entering the field can use these descriptions and results as a self-teaching tool to ensure that they are able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
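    Before attempting any of the six reference cases, a newcomer can sanity-check a transport code against a problem with a closed-form answer. A minimal illustration of that habit (this is not one of the TG195 cases; the slab parameters are arbitrary assumptions):

```python
# Generic first benchmark for a new Monte Carlo photon code: uncollided
# transmission through a slab, which has the analytic answer exp(-mu*t).
import random
import math

mu, t, n = 0.2, 5.0, 200_000        # 1/cm, cm, number of histories
# Sample a free path -ln(U)/mu per photon; it transmits if the path > t.
transmitted = sum(1 for _ in range(n)
                  if -math.log(1.0 - random.random()) / mu > t)
mc = transmitted / n
exact = math.exp(-mu * t)
print(f"MC {mc:.4f} vs analytic {exact:.4f} "
      f"({abs(mc - exact) / exact:.2%} relative difference)")
```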

  17. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in cladding, multi-physics coupling is needed to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  18. Open Rotor Aeroacoustic Modeling

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2012-01-01

    Owing to their inherent fuel efficiency, there is renewed interest in developing open rotor propulsion systems that are both efficient and quiet. The major contributor to the overall noise of an open rotor system is the propulsor noise, which is produced as a result of the interaction of the airstream with the counter-rotating blades. As such, robust aeroacoustic prediction methods are an essential ingredient in any approach to designing low-noise open rotor systems. To that end, an effort has been underway at NASA to assess current open rotor noise prediction tools and develop new capabilities. Under this effort, high-fidelity aerodynamic simulations of a benchmark open rotor blade set were carried out and used to make noise predictions via existing NASA open rotor noise prediction codes. The results have been compared with the aerodynamic and acoustic data that were acquired for this benchmark open rotor blade set. The emphasis of this paper is on providing a summary of recent results from a NASA Glenn effort to validate an in-house open rotor noise prediction code called LINPROP, which is based on a high-blade-count asymptotic approximation to the Ffowcs Williams-Hawkings equation. The results suggest that while predicting the absolute levels may be difficult, the noise trends are reasonably well predicted by this approach.

  20. An experimental investigation of multi-element airfoil ice accretion and resulting performance degradation

    NASA Technical Reports Server (NTRS)

    Potapczuk, Mark G.; Berkowitz, Brian M.

    1989-01-01

    An investigation of the ice accretion pattern and performance characteristics of a multi-element airfoil was undertaken in the NASA Lewis 6- by 9-Foot Icing Research Tunnel. Several configurations of main airfoil, slat, and flaps were employed to examine the effects of ice accretion and provide further experimental information for code validation purposes. The test matrix consisted of glaze, rime, and mixed icing conditions. Airflow and icing cloud conditions were set to correspond to those typical of the operating environment anticipated for a commercial transport vehicle. Results obtained included ice profile tracings, photographs of the ice accretions, and force balance measurements obtained both during the accretion process and in a post-accretion evaluation over a range of angles of attack. The tracings and photographs indicated significant accretions on the slat leading edge, in gaps between slat or flaps and the main wing, on the flap leading-edge surfaces, and on flap lower surfaces. Force measurements indicate the possibility of severe performance degradation, especially near C_Lmax, for both light and heavy ice accretions, and provide a basis for evaluating the ice accretion and performance analysis codes presently in use. The LEWICE code was used to evaluate the ice accretion shape developed during one of the rime ice tests. The actual ice shape was then evaluated, using a Navier-Stokes code, for changes in performance characteristics. These predicted results were compared to the measured results and indicate very good agreement.

  1. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... sets inherent to a transaction, and not related to the format of the transaction. Data elements that... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  2. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... sets inherent to a transaction, and not related to the format of the transaction. Data elements that... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  3. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... sets inherent to a transaction, and not related to the format of the transaction. Data elements that... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  4. Development of PRIME for irradiation performance analysis of U-Mo/Al dispersion fuel

    NASA Astrophysics Data System (ADS)

    Jeong, Gwan Yoon; Kim, Yeon Soo; Jeong, Yong Jin; Park, Jong Man; Sohn, Dong-Seong

    2018-04-01

    A prediction code for the thermo-mechanical performance of research reactor fuel (PRIME) has been developed, with models implemented to analyze the irradiation behavior of U-Mo dispersion fuel. The code is capable of predicting the two-dimensional thermal and mechanical performance of U-Mo dispersion fuel during irradiation. A finite element method was employed to solve the governing equations for thermal and mechanical equilibria. Temperature- and burnup-dependent material properties of the fuel meat constituents and cladding were used. The numerical solution schemes in PRIME were verified by benchmarking against solutions obtained using a commercial finite element analysis program (ABAQUS). The code was validated using irradiation data from the RERTR, HAMP-1, and E-FUTURE tests. The measured irradiation data used in the validation were interaction layer (IL) thickness and the volume fractions of fuel meat constituents for the thermal analysis, and profiles of plate thickness change and fuel meat swelling for the mechanical analysis. The prediction results were in good agreement with the measurement data for both thermal and mechanical analyses, confirming the validity of the code.
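    The assemble-and-solve pattern behind such a finite element code can be shown in one dimension. A minimal steady heat-conduction sketch (linear elements, uniform heat source; all property values are placeholders, and the real code is two-dimensional, transient, and coupled to mechanics):

```python
# Minimal 1-D steady heat-conduction finite-element sketch: assemble
# element stiffness matrices and load vectors, apply boundary
# conditions, solve. Values below are illustrative placeholders.
import numpy as np

n_el, L = 40, 0.001            # elements, slab thickness (m), assumed
k, q = 15.0, 5.0e9             # conductivity (W/m-K), heat source (W/m^3)
h = L / n_el
n_nodes = n_el + 1

K = np.zeros((n_nodes, n_nodes))
F = np.zeros(n_nodes)
for e in range(n_el):
    i, j = e, e + 1
    ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = q * h / 2.0 * np.ones(2)                        # consistent load
    K[np.ix_([i, j], [i, j])] += ke
    F[[i, j]] += fe

# Dirichlet BCs: both surfaces held at a fixed cladding-side temperature
T_surf = 400.0                 # surface temperature (K), assumed
for node in (0, n_nodes - 1):
    K[node, :] = 0.0
    K[node, node] = 1.0
    F[node] = T_surf

T = np.linalg.solve(K, F)
print(f"peak temperature: {T.max():.1f} K")   # analytic: T_surf + q*L^2/(8k)
```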

  5. Deep learning for galaxy surface brightness profile fitting

    NASA Astrophysics Data System (ADS)

    Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.; Domínguez Sánchez, H.; Dimauro, P.

    2018-03-01

    Numerous ongoing and future large area surveys (e.g. Dark Energy Survey, EUCLID, Large Synoptic Survey Telescope, Wide Field Infrared Survey Telescope) will increase by several orders of magnitude the volume of data that can be exploited for galaxy morphology studies. The full potential of these surveys can be unlocked only with the development of automated, fast, and reliable analysis methods. In this paper, we present DeepLeGATo, a new method for 2-D photometric galaxy profile modelling based on convolutional neural networks. Our code is trained and validated on analytic profiles (HST/CANDELS F160W filter) and is able to retrieve the full set of parameters of one-component Sérsic models: total magnitude, effective radius, Sérsic index, and axis ratio. We show detailed comparisons between our code and GALFIT. On simulated data, our method is more accurate than GALFIT and ~3000 times faster on a GPU (~50 times faster when running on the same CPU). On real data, DeepLeGATo trained on simulations behaves similarly to GALFIT on isolated galaxies. With a fast domain-adaptation step using only 0.1-0.8 per cent of the original training set size, our code readily reproduces the results obtained with GALFIT even in crowded regions. DeepLeGATo requires no human intervention beyond the training step, rendering it far more automated than traditional profiling methods. The development of this method for more complex models (two-component galaxies, variable point spread function, dense sky regions) could constitute a fundamental tool in the era of big data in astronomy.
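    The core of the method is a convolutional network trained to regress profile parameters from image stamps. A deliberately small sketch of that idea (the architecture, sizes, and random stand-in data below are illustrative assumptions, not the published network):

```python
# Minimal sketch of CNN-based profile fitting: a small convolutional
# network regressing four Sersic parameters (magnitude, radius, index,
# axis ratio) from galaxy stamps. Training data here are random.
import torch
import torch.nn as nn

class ProfileNet(nn.Module):
    def __init__(self, n_params=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),  # 64x64 input -> 16x16
            nn.Linear(64, n_params),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ProfileNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

stamps = torch.randn(8, 1, 64, 64)     # stand-in for simulated stamps
params = torch.randn(8, 4)             # stand-in for true Sersic params
for _ in range(5):                     # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(stamps), params)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```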

  6. The skyshine benchmark experiment revisited.

    PubMed

    Terry, Ian R

    2005-01-01

    With the coming renaissance of nuclear power, heralded by new nuclear power plant construction in Finland, the issue of qualifying modern calculation tools becomes prominent. Among the calculations required may be the determination of radiation levels outside the plant owing to skyshine. For example, knowledge of the degree of accuracy in the calculation of gamma skyshine through the turbine hall roof of a BWR plant is important. Modern survey programs which can calculate skyshine dose rates tend to be qualified only by verification against the results of Monte Carlo calculations. However, in the past, exacting experimental work has been performed in the field for gamma skyshine, notably the benchmark work in 1981 by Shultis and co-workers, which considered not just the open source case but also the effects of placing a concrete roof above the source enclosure. The latter case is a better reflection of reality, as safety considerations nearly always require the source to be shielded in some way, usually by substantial walls and a thinner roof. One of the tools developed since that time which can both calculate skyshine radiation and accurately model the geometrical set-up of an experiment is the code RANKERN, used by Framatome ANP and other organisations for general shielding design work. The following description concerns the use of this code to re-address the experimental results from 1981. This provides a realistic gauge to validate, but also to set limits on, the program for future gamma skyshine applications within the applicable licensing procedures for all users of the code.

  7. SARAH 4: A tool for (not only SUSY) model builders

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2014-06-01

    We present the new version of the Mathematica package SARAH, which provides the same features for non-supersymmetric models as previous versions did for supersymmetric ones. This includes an easy and straightforward definition of the model and the calculation of all vertices, mass matrices, tadpole equations, and self-energies. The two-loop renormalization group equations for a general gauge theory are now also included and have been validated against the independent Python code PyR@TE. Model files for FeynArts, CalcHep/CompHep, WHIZARD and the UFO format can be written, and source code for SPheno can be generated for the calculation of the mass spectrum, a set of precision observables, and the decay widths and branching ratios of all states. Furthermore, the new version includes routines to output model files for Vevacious for both supersymmetric and non-supersymmetric models. Global symmetries are also supported in this version, and by linking Susyno the handling of Lie groups has been improved and extended.

  8. A Summary of Validation Results for LEWICE 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1998-01-01

    A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different point spacing and time step criteria across general computing platforms. It also differs in the extensive effort undertaken to compare the results in a quantifiable manner against the database of ice shapes that have been generated in the NASA Lewis Icing Research Tunnel (IRT). The complete set of data used for this comparison is available in a recent contractor report. The result of this comparison shows that the difference between the predicted ice shape from LEWICE 2.0 and the average of the experimental data is 7.2%, while the variability of the experimental data is 2.5%.
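    Reducing two ice shapes to a single difference figure requires resampling both onto a common abscissa and normalizing. A hypothetical stand-in for such a measure (this is not the report's actual metric, and the toy shapes below are invented):

```python
# Hypothetical stand-in for a quantitative ice-shape comparison:
# resample predicted and measured ice thickness onto a common surface
# coordinate and report the mean absolute difference as a percentage
# of the peak measured thickness.
import numpy as np

s = np.linspace(0.0, 1.0, 50)                        # surface coordinate
measured = 0.02 * np.exp(-((s - 0.10) / 0.1) ** 2)   # toy ice thickness (m)
predicted = 0.021 * np.exp(-((s - 0.12) / 0.1) ** 2)

diff = np.mean(np.abs(predicted - measured)) / measured.max()
print(f"mean shape difference: {diff:.1%} of peak thickness")
```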

  9. Modeling Longitudinal Dynamics in the Fermilab Booster Synchrotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostiguy, Jean-Francois; Bhat, Chandra; Lebedev, Valeri

    2016-06-01

    The PIP-II project will replace the existing 400 MeV linac with a new, CW-capable, 800 MeV superconducting one. With respect to current operations, a 50% increase in beam intensity in the rapid cycling Booster synchrotron is expected. Booster batches are combined in the Recycler ring; this process limits the allowed longitudinal emittance of the extracted Booster beam. To suppress eddy currents, the Booster has no beam pipe; magnets are evacuated, exposing the beam to core laminations, and this has a substantial impact on the longitudinal impedance. Noticeable longitudinal emittance growth is already observed at transition crossing, and operation at higher intensity will likely necessitate mitigation measures. We describe systematic efforts to construct a predictive model for current operating conditions. A longitudinal-only code including a laminated-wall impedance model, space charge effects, and feedback loops is developed. Parameter validation is performed using detailed measurements of relevant beam, rf, and control parameters. An attempt is made to benchmark the code at operationally favorable machine settings.
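    Underneath the impedance and feedback models, a longitudinal-only code advances each particle with a one-turn map in RF phase and energy deviation. A bare sketch of that map (all parameter values are arbitrary illustrations, not PIP-II or Booster settings, and impedance, space charge, and feedback are omitted):

```python
# Backbone of a longitudinal-only tracking code: the one-turn map for
# RF phase and energy deviation of a single particle.
import math

V, phi_s = 0.8e6, 0.0           # RF voltage (V) and synchronous phase
E, eta, h = 1.0e9, -0.45, 84    # energy (eV), slip factor, harmonic number
beta2 = 0.995                   # (v/c)^2, assumed below transition

phi, dE = phi_s + 0.3, 0.0      # initial phase and energy offsets
for turn in range(5):
    dE += V * (math.sin(phi) - math.sin(phi_s))        # RF kick
    phi += 2 * math.pi * h * eta * dE / (beta2 * E)    # phase slip
    print(f"turn {turn}: phase {phi:+.4f} rad, dE {dE:+.3e} eV")
```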

  10. Identification of Phosphorylation Codes for Arrestin Recruitment by G Protein-Coupled Receptors.

    PubMed

    Zhou, X Edward; He, Yuanzheng; de Waal, Parker W; Gao, Xiang; Kang, Yanyong; Van Eps, Ned; Yin, Yanting; Pal, Kuntal; Goswami, Devrishi; White, Thomas A; Barty, Anton; Latorraca, Naomi R; Chapman, Henry N; Hubbell, Wayne L; Dror, Ron O; Stevens, Raymond C; Cherezov, Vadim; Gurevich, Vsevolod V; Griffin, Patrick R; Ernst, Oliver P; Melcher, Karsten; Xu, H Eric

    2017-07-27

    G protein-coupled receptors (GPCRs) mediate diverse signaling in part through interaction with arrestins, whose binding promotes receptor internalization and signaling through G protein-independent pathways. High-affinity arrestin binding requires receptor phosphorylation, often at the receptor's C-terminal tail. Here, we report an X-ray free electron laser (XFEL) crystal structure of the rhodopsin-arrestin complex, in which the phosphorylated C terminus of rhodopsin forms an extended intermolecular β sheet with the N-terminal β strands of arrestin. Phosphorylation was detected at rhodopsin C-terminal tail residues T336 and S338. These two phospho-residues, together with E341, form an extensive network of electrostatic interactions with three positively charged pockets in arrestin in a mode that resembles binding of the phosphorylated vasopressin-2 receptor tail to β-arrestin-1. Based on these observations, we derived and validated a set of phosphorylation codes that serve as a common mechanism for phosphorylation-dependent recruitment of arrestins by GPCRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.

    PubMed

    Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing

    2009-03-06

    Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection framework, this paper presents a computational system to predict protein-protein interactions (PPIs) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poorly performing and/or redundant features, leaving 103 features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall prediction accuracy of 76.18%, evaluated by a 10-fold cross-validation test, which is 1.46% higher than with the initial 114 features and 6.51% higher than with the 20 features of the plain amino acid composition. The PPI predictor developed for this research is available for public use at http://chemdata.shu.edu.cn/ppi.
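    The evaluation pattern described here (select a feature subset, then score a KNN classifier by 10-fold cross-validation) is straightforward to reproduce. A sketch with scikit-learn, using mutual-information ranking as a stand-in for the paper's mRMR-KNNs-wrapper and random placeholder data:

```python
# Sketch of the evaluation pattern above: reduce a PseAA-style feature
# matrix to a selected subset, then score a KNN model by 10-fold
# cross-validation. Data here are random placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 114))        # 114 PseAA features (stand-in)
y = rng.integers(0, 2, size=500)       # interacting / non-interacting

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=103),   # keep 103 features
    KNeighborsClassifier(n_neighbors=5),
)
scores = cross_val_score(pipe, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```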

  12. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable for modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight direction. To correct these biases, we developed a code correction model that estimates the BeiDou-satellite-induced biases as piecewise-linear functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without the code bias corrections applied. Orbit precision statistics indicate that these code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the 3D root-mean-square orbit position error was reduced from 150.6 to 56.3 cm.
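    The piecewise-linear part of such a correction can be illustrated by binning a code multipath combination against elevation and interpolating between bin means. A synthetic sketch of that estimation step (the grid model for near-field effects is omitted, and the data below are invented):

```python
# Sketch of elevation-dependent code-bias estimation: bin a multipath
# combination by elevation and connect the bin means into a
# piecewise-linear correction. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
elev = rng.uniform(0.0, 90.0, 5000)                # elevation angles (deg)
true_bias = -0.4 + 0.01 * elev                     # toy systematic bias (m)
mp = true_bias + rng.normal(0.0, 0.3, elev.size)   # noisy multipath obs

edges = np.arange(0.0, 91.0, 10.0)                 # 10-degree bins
nodes = 0.5 * (edges[:-1] + edges[1:])
means = np.array([mp[(elev >= lo) & (elev < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])])

def bias_correction(e):
    """Piecewise-linear bias at elevation e, interpolated between nodes."""
    return np.interp(e, nodes, means)

print(f"estimated bias at 45 deg: {bias_correction(45.0):+.3f} m "
      f"(truth {-0.4 + 0.01 * 45.0:+.3f} m)")
```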

  14. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs, and different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
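    The enabling idea is domain decomposition with ghost-cell (halo) exchange between neighbouring devices. A sketch of that exchange using MPI on CPUs rather than GPU-to-GPU transfers (mpi4py is assumed available; run with, e.g., mpirun -n 4):

```python
# Domain-decomposition sketch: each rank owns a strip of rows plus one
# ghost row per neighbour, and exchanges ghost rows every step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, local_rows = 64, 16                          # strip of a 64-wide domain
u = np.full((local_rows + 2, nx), float(rank))   # +2 ghost rows

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# send top interior row up, receive into bottom ghost row (and vice versa)
comm.Sendrecv(sendbuf=u[1].copy(), dest=up, recvbuf=u[-1], source=down)
comm.Sendrecv(sendbuf=u[-2].copy(), dest=down, recvbuf=u[0], source=up)

print(f"rank {rank}: ghost rows now {u[0, 0]:.0f} and {u[-1, 0]:.0f}")
```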

  16. Generalized fluid theory including non-Maxwellian kinetic effects

    DOE PAGES

    Izacard, Olivier

    2017-03-29

    The results obtained by the plasma physics community for the validation and prediction of turbulence and transport in magnetized plasmas come mainly from the use of very CPU-consuming particle-in-cell or (gyro)kinetic codes, which naturally include non-Maxwellian kinetic effects. To date, fluid codes have not been considered relevant for the description of these kinetic effects. Here, after revisiting the limitations of the fluid theory developed in the 19th century, we generalize the fluid theory to include kinetic effects, such as non-Maxwellian super-thermal tails, with as few fluid equations as possible. The collisionless and collisional fluid closures from the nonlinear Landau Fokker-Planck collision operator are shown for arbitrary collisionality. The first fluid models associated with two examples of collisionless fluid closures are obtained by assuming an analytic non-Maxwellian distribution function. One of the main differences with the literature is our analytic representation of the distribution function in velocity phase space with as few hidden variables as possible, thanks to the use of non-orthogonal basis sets. These new non-Maxwellian fluid equations could initiate the next generation of fluid codes including kinetic effects and can be extended to other scientific disciplines such as astrophysics, condensed matter, or hydrodynamics. As a validation test, we perform a numerical simulation based on a minimal reduced INMDF fluid model. The result of this test is the discovery of the origin of particle and heat diffusion: the diffusion is due to the competition between a growing INMDF on short time scales, driven by spatial gradients, and thermalization on longer time scales. The results shown here could provide insights to break some of the unsolved puzzles of turbulence.
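    The importance of super-thermal tails can be seen by taking velocity moments directly. A toy 1-D illustration (the shifted-Gaussian tail below is an invented distribution, not the paper's INMDF):

```python
# Why super-thermal tails matter for fluid closures: velocity moments
# of a 1-D Maxwellian plus a small shifted "tail" population. The tail
# barely changes density or mean flow but produces a finite heat-flux
# moment, which a pure Maxwellian closure would set to zero.
import numpy as np

v = np.linspace(-10.0, 10.0, 4001)
dv = v[1] - v[0]
maxwellian = np.exp(-0.5 * v**2) / np.sqrt(2.0 * np.pi)
tail = 0.05 * np.exp(-0.5 * (v - 4.0) ** 2) / np.sqrt(2.0 * np.pi)

for label, f in (("Maxwellian", maxwellian), ("with tail ", maxwellian + tail)):
    n = np.sum(f) * dv                 # density
    u = np.sum(v * f) * dv / n         # mean flow
    w = v - u
    p = np.sum(w**2 * f) * dv          # pressure moment
    q = np.sum(w**3 * f) * dv          # heat-flux moment
    print(f"{label}: n={n:.3f}  u={u:.3f}  p={p:.3f}  q={q:.3f}")
```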

  17. Use of electronic health records to ascertain, validate and phenotype acute myocardial infarction: A systematic review and recommendations.

    PubMed

    Rubbo, Bruna; Fitzpatrick, Natalie K; Denaxas, Spiros; Daskalopoulou, Marina; Yu, Ning; Patel, Riyaz S; Hemingway, Harry

    2015-01-01

    Electronic health records (EHRs) offer the opportunity to ascertain clinical outcomes at large scale and low cost, thus facilitating cohort studies, quality of care research, and clinical trials. For acute myocardial infarction (AMI), the extent to which different EHR sources are accessible and accurate remains uncertain. Using MEDLINE and EMBASE, we identified thirty-three studies, reporting a total of 128,658 patients, published between January 2000 and July 2014 that permitted assessment of the validity of AMI diagnoses drawn from EHR sources against a reference such as manual chart review. In contrast to clinical practice, only one study used EHR-derived markers of myocardial necrosis to identify possible AMI cases, none used electrocardiogram findings, and one used symptoms in the form of free text combined with coded diagnosis; the remaining studies relied mostly on coded diagnoses. Thirty-one studies reported a positive predictive value (PPV) ≥ 70% between the AMI diagnosis from secondary care and primary care EHRs and the reference. Among the fifteen studies reporting EHR-derived AMI phenotypes, three cross-referenced ST-segment elevation AMI diagnoses (PPV range 71-100%), two non-ST-segment elevation AMI (PPV 91.0% and 92.1%), three non-fatal AMI (PPV range 82-92.2%), and six fatal AMI (PPV range 64-91.7%). Clinical coding of EHR-derived AMI diagnoses in primary and secondary care was found to be accurate across different clinical settings and phenotypes. However, markers of myocardial necrosis, ECG findings, and symptoms, the cornerstones of a clinical diagnosis, are underutilised and remain a challenge to retrieve from EHRs. Copyright © 2015. Published by Elsevier Ireland Ltd.

  18. Fiber Optic Distributed Sensors for High-resolution Temperature Field Mapping.

    PubMed

    Lomperski, Stephen; Gerardi, Craig; Lisowski, Darius

    2016-11-07

    The reliability of computational fluid dynamics (CFD) codes is checked by comparing simulations with experimental data. A typical data set consists chiefly of velocity and temperature readings, both ideally having high spatial and temporal resolution to facilitate rigorous code validation. While high resolution velocity data is readily obtained through optical measurement techniques such as particle image velocimetry, it has proven difficult to obtain temperature data with similar resolution. Traditional sensors such as thermocouples cannot fill this role, but the recent development of distributed sensing based on Rayleigh scattering and swept-wave interferometry offers resolution suitable for CFD code validation work. Thousands of temperature measurements can be generated along a single thin optical fiber at hundreds of Hertz. Sensors function over large temperature ranges and within opaque fluids where optical techniques are unsuitable. But this type of sensor is sensitive to strain and humidity as well as temperature and so accuracy is affected by handling, vibration, and shifts in relative humidity. Such behavior is quite unlike traditional sensors and so unconventional installation and operating procedures are necessary to ensure accurate measurements. This paper demonstrates implementation of a Rayleigh scattering-type distributed temperature sensor in a thermal mixing experiment involving two air jets at 25 and 45 °C. We present criteria to guide selection of optical fiber for the sensor and describe installation setup for a jet mixing experiment. We illustrate sensor baselining, which links readings to an absolute temperature standard, and discuss practical issues such as errors due to flow-induced vibration. This material can aid those interested in temperature measurements having high data density and bandwidth for fluid dynamics experiments and similar applications. We highlight pitfalls specific to these sensors for consideration in experiment design and operation.
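    A practical consequence of the baselining step described above is that the raw sensor output is a spectral shift, not a temperature: an absolute trace requires a reference scan at a known temperature plus a shift-to-temperature coefficient. A sketch of that conversion (the coefficient and profile below are hypothetical placeholders, not calibration values from this work):

```python
# Sketch of the baselining step for a Rayleigh-scatter distributed
# sensor: readings are spectral shifts relative to a reference scan, so
# an absolute temperature trace needs a baseline at a known temperature
# and a shift-to-temperature coefficient. All values are placeholders.
import numpy as np

k_temp = -0.801        # GHz per degC, placeholder calibration coefficient
T_ref = 25.0           # temperature (degC) during the baseline scan

z = np.linspace(0.0, 2.0, 800)         # position along the fiber (m)
baseline_shift = np.zeros_like(z)      # reference scan
# synthetic scan: a warm jet raises a 0.2 m region of fiber by 20 degC
shift = baseline_shift + k_temp * 20.0 * np.exp(-((z - 1.0) / 0.2) ** 2)

temperature = T_ref + (shift - baseline_shift) / k_temp
print(f"peak temperature: {temperature.max():.1f} degC at "
      f"z = {z[temperature.argmax()]:.2f} m")
```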

  19. Radioactive waste isolation in salt: special advisory report on the status of the Office of Nuclear Waste Isolation's plans for repository performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ditmars, J.D.; Walbridge, E.W.; Rote, D.M.

    1983-10-01

    Repository performance assessment is analysis that identifies events and processes that might affect a repository system for isolation of radioactive waste, examines their effects on barriers to waste migration, and estimates the probabilities of their occurrence and their consequences. In 1983 Battelle Memorial Institute's Office of Nuclear Waste Isolation (ONWI) prepared two plans - one for performance assessment for a waste repository in salt and one for verification and validation of performance assessment technology. At the request of the US Department of Energy's Salt Repository Project Office (SRPO), Argonne National Laboratory reviewed those plans and prepared this report to advise SRPO of specific areas where ONWI's plans for performance assessment might be improved. This report presents a framework for repository performance assessment that clearly identifies the relationships among the disposal problems, the processes underlying the problems, the tools for assessment (computer codes), and the data. In particular, the relationships among important processes and 26 model codes available to ONWI are indicated. A common suggestion for computer code verification and validation is the need for specific and unambiguous documentation of the results of performance assessment activities. A major portion of this report consists of status summaries of 27 model codes indicated as potentially useful by ONWI. The code summaries focus on three main areas: (1) the code's purpose, capabilities, and limitations; (2) the status of the elements of documentation and review essential for code verification and validation; and (3) the proposed application of the code for performance assessment of salt repository systems. 15 references, 6 figures, 4 tables.

  20. Evaluating a Dental Diagnostic Terminology in an Electronic Health Record

    PubMed Central

    White, Joel M.; Kalenderian, Elsbeth; Stark, Paul C.; Ramoni, Rachel L.; Vaderhobli, Ram; Walji, Muhammad F.

    2011-01-01

    Standardized treatment procedure codes and terms are routinely used in dentistry. Utilization of a diagnostic terminology is common in medicine, but no satisfactory, commonly standardized dental diagnostic terminology is available at this time. Recent advances in dental informatics have provided an opportunity for inclusion of diagnostic codes and terms as part of treatment planning and documentation in the patient treatment history. This article reports the results of the use of a diagnostic coding system in a large dental school’s predoctoral clinical practice. A list of diagnostic codes and terms, called Z codes, was developed by dental faculty members. The diagnostic codes and terms were implemented in an electronic health record (EHR) for use in a predoctoral dental clinic, and the utilization of diagnostic terms was quantified. The validity of Z code entry was evaluated by comparing the diagnostic term entered to the procedure performed, where valid diagnosis-procedure associations were determined by consensus among three calibrated, academically based dentists. A total of 115,004 dental procedures were entered into the EHR during the year sampled. Of those, 43,053 were excluded from this analysis because they represented diagnostic or other procedures unrelated to treatment. Among the 71,951 treatment procedures, 27,973 had diagnoses assigned to them, an overall utilization of 38.9 percent. Of the 147 available Z codes, ninety-three were used (63.3 percent). There were 335 unique procedures provided and 2,127 procedure/diagnosis pairs captured in the EHR. Overall, 76.7 percent of the diagnoses entered were valid. We conclude that a dental diagnostic terminology can be incorporated within an electronic health record and utilized in an academic clinical environment. Challenges remain in the development of terms and in implementation and ease of use that, if resolved, would improve utilization. PMID:21546594
