The Nuclear Energy Knowledge and Validation Center Summary of Activities Conducted in FY16
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans David
The Nuclear Energy Knowledge and Validation Center (NEKVaC) is a new initiative by the Department of Energy (DOE) and Idaho National Laboratory (INL) to coordinate and focus the resources and expertise that exist within the DOE toward solving issues in modern nuclear code validation and knowledge management. In time, code owners, users, and developers will view the NEKVaC as a partner and essential resource for acquiring the best practices and latest techniques for validating codes, providing guidance in planning and executing experiments, facilitating access to and maximizing the usefulness of existing data, and preserving knowledge for continual use by nuclear professionals and organizations for their own validation needs. The scope of the NEKVaC covers many interrelated activities that will need to be cultivated carefully in the near term and managed properly once the NEKVaC is fully functional. Three areas comprise the principal mission: (1) identify and prioritize projects that extend the field of validation science and its application to modern codes, (2) develop and disseminate best practices and guidelines for high-fidelity multiphysics/multiscale analysis code development and associated experiment design, and (3) define protocols for data acquisition and knowledge preservation and provide a portal for access to databases currently scattered among numerous organizations. These mission areas, while each having a unique focus, are interdependent and complementary. Likewise, all activities supported by the NEKVaC, both near term and long term, must possess elements supporting all three areas. This cross-cutting nature is essential to ensuring that activities and supporting personnel do not become “stove-piped” (i.e., focused so much on a specific function that the activity itself becomes the objective rather than achieving the larger vision).
This report begins with a description of the mission areas; specifically, the role played by each major committee and the types of activities for which they are responsible. It then lists and describes the proposed near-term tasks upon which future efforts can build.
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean age = 8.35 years, SD = 4.72 years) were videotaped with their physiotherapist during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than during the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
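The concurrent-validity check reported above comes down to a Pearson correlation between the observational scale and the reference rating. A minimal sketch with hypothetical scores, not the study's data:

```python
# Concurrent validity via Pearson's r between two pain ratings of the
# same observation windows. Scores below are invented for illustration.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical facial-coding and numerical-rating scores
cfcs = [2, 5, 9, 4, 7, 1, 8, 6]
nrs = [1, 4, 8, 3, 6, 2, 9, 5]
print(round(pearson_r(cfcs, nrs), 2))
```

A value near 1 would indicate the two measures track each other closely, as in the r = .72 reported above.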
A Comprehensive Validation Approach Using The RAVEN Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J
2015-06-01
The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework for performing parametric and probabilistic analysis based on the responses of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these gaps. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment, representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
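The kind of input-space sampling plus comparison metric that frameworks like RAVEN automate can be illustrated with a toy stand-in; everything below (the surrogate model, its inputs, the "measurements") is hypothetical and far simpler than a real system-code run:

```python
# Sketch: perturb an uncertain input, collect the code's figure of merit
# over many runs, and compare its empirical CDF with experimental points
# via a simple distance metric. toy_code() is a stand-in, not RELAP-7.

import random

def toy_code(inlet_temp_K):
    # Stand-in for an expensive system-code evaluation
    return 0.85 * inlet_temp_K + 40.0

def empirical_cdf(samples, x):
    return sum(s <= x for s in samples) / len(samples)

random.seed(0)
# Monte Carlo exploration of the input space
runs = [toy_code(random.gauss(500.0, 5.0)) for _ in range(1000)]
experiments = [462.0, 465.5, 463.8, 467.1]  # hypothetical measurements

# Simple CDF-distance metric evaluated at the experimental points
metric = max(abs(empirical_cdf(runs, e) - empirical_cdf(experiments, e))
             for e in experiments)
print(f"CDF distance metric: {metric:.3f}")
```

A smaller metric indicates the sampled code responses bracket the measurements more faithfully; a real methodology would use several such metrics, as the abstract notes.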
Summary of EASM Turbulence Models in CFL3D With Validation Test Cases
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.
2003-01-01
This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
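The validation estimates the authors call for (predictive power, sensitivity, specificity) follow directly from a 2x2 chart-review table. A sketch with hypothetical counts for an illustrative claims algorithm; none of these numbers come from the review:

```python
# Standard 2x2 validation metrics for a claims-based case-finding
# algorithm, computed against chart review as the reference standard.
# tp/fp/fn/tn counts below are invented for illustration.

def algorithm_performance(tp, fp, fn, tn):
    return {
        "ppv": tp / (tp + fp),            # positive predictive value
        "sensitivity": tp / (tp + fn),    # true cases flagged
        "specificity": tn / (tn + fp),    # non-cases left unflagged
    }

# Hypothetical chart-review outcome for an ICD-9-based ARF algorithm
m = algorithm_performance(tp=80, fp=20, fn=10, tn=890)
print(m)
```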
Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van
2018-04-01
In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectrometry. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received "Accepted" status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are included for a code-to-code comparison.
Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems
NASA Astrophysics Data System (ADS)
Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.
2008-08-01
This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.
Computational Modeling and Validation for Hypersonic Inlets
NASA Technical Reports Server (NTRS)
Povinelli, Louis A.
1996-01-01
Hypersonic inlet research activity at NASA is reviewed. The basis for the paper is the experimental tests performed with three inlets: the NASA Lewis Research Center Mach 5, the McDonnell Douglas Mach 12, and the NASA Langley Mach 18. Both three-dimensional PNS and NS codes have been used to compute the flow within the three inlets. Modeling assumptions in the codes involve the turbulence model, the nature of the boundary layer, shock wave-boundary layer interaction, and the flow spilled to the outside of the inlet. Use of the codes and the experimental data are helping to develop a clearer understanding of the inlet flow physics and to focus on the modeling improvements required in order to arrive at validated codes.
GEANT4 benchmark with MCNPX and PHITS for activation of concrete
NASA Astrophysics Data System (ADS)
Tesse, Robin; Stichelbaut, Frédéric; Pauly, Nicolas; Dubus, Alain; Derrien, Jonathan
2018-02-01
The activation of concrete is a real problem from the point of view of waste management. Because of the complexity of the issue, Monte Carlo (MC) codes have become an essential tool for its study, but various MC codes, and various nuclear models within them, are available. MCNPX and PHITS have already been validated for shielding studies, and GEANT4 is also a suitable option. In these codes, different models can be considered for a concrete activation study. The Bertini model is not the best choice for spallation, while the BIC and INCL models agree well with previous results in the literature.
NASA Astrophysics Data System (ADS)
Frisoni, Manuela
2017-09-01
ANITA-IEAF is an activation package (code and libraries) developed at ENEA-Bologna to assess the activation of materials exposed to neutrons with energies greater than 20 MeV. An updated version of the ANITA-IEAF activation code package has been developed. It is suitable for studying irradiation effects on materials in facilities such as the International Fusion Materials Irradiation Facility (IFMIF) and the DEMO Oriented Neutron Source (DONES), in which a considerable number of neutrons with energies above 20 MeV is produced. The present paper summarizes the main characteristics of the updated version of ANITA-IEAF, which can use decay and cross section data based on more recent evaluated nuclear data libraries, i.e., the JEFF-3.1.1 Radioactive Decay Data Library and the EAF-2010 neutron activation cross section library. In this paper, the validation effort comparing the code predictions against activity measurements obtained at the Karlsruhe Isochronous Cyclotron is presented. In this integral experiment, samples of two different steels (SS-316 and F82H), pure vanadium, and a vanadium alloy, structural materials of interest in fusion technology, were activated in a neutron spectrum similar to the IFMIF neutron field.
Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael
This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
NASA Astrophysics Data System (ADS)
Frisoni, Manuela
2016-03-01
ANITA-2000 is a code package for the activation characterization of materials exposed to neutron irradiation released by ENEA to OECD-NEADB and ORNL-RSICC. The main component of the package is the activation code ANITA-4M that computes the radioactive inventory of a material exposed to neutron irradiation. The code requires the decay data library (file fl1) containing the quantities describing the decay properties of the unstable nuclides and the library (file fl2) containing the gamma ray spectra emitted by the radioactive nuclei. The fl1 and fl2 files of the ANITA-2000 code package, originally based on the evaluated nuclear data library FENDL/D-2.0, were recently updated on the basis of the JEFF-3.1.1 Radioactive Decay Data Library. This paper presents the results of the validation of the new fl1 decay data library through the comparison of the ANITA-4M calculated values with the measured electron and photon decay heats and activities of fusion material samples irradiated at the 14 MeV Frascati Neutron Generator (FNG) of the NEA-Frascati Research Centre. Twelve material samples were considered, namely: Mo, Cu, Hf, Mg, Ni, Cd, Sn, Re, Ti, W, Ag and Al. The ratios between calculated and experimental values (C/E) are shown and discussed in this paper.
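The C/E (calculated-to-experimental) comparison used in such activation validations is straightforward to tabulate; the sketch below uses hypothetical activity values for a few sample materials, not the FNG measurements:

```python
# C/E ratios for an activation-code validation: the calculated decay heat
# or activity divided by the measured value, per sample. A ratio of 1.0
# means perfect agreement. All values below are invented for illustration.

def c_over_e(calculated, experimental):
    """C/E ratios keyed by sample material."""
    return {s: calculated[s] / experimental[s] for s in calculated}

calc = {"Mo": 1.05e3, "Cu": 9.8e2, "W": 2.1e3}  # hypothetical, Bq/g
meas = {"Mo": 1.00e3, "Cu": 1.0e3, "W": 2.0e3}
ratios = c_over_e(calc, meas)
print({s: round(r, 2) for s, r in ratios.items()})
```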
Verification and Validation of the BISON Fuel Performance Code for PCMI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Novascone, Stephen Rhead; Gardner, Russell James
2016-06-01
BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. A brief overview of BISON’s computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described. Validation for application to light water reactor (LWR) PCMI problems is assessed by comparing predicted and measured rod diameter following base irradiation and power ramps. Results indicate a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. Initial rod diameter comparisons have led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.
The Nuclear Energy Knowledge and Validation Center – Summary of Activities Conducted in FY15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans David; Hong, Bonnie Colleen
2016-05-01
The Nuclear Energy Knowledge and Validation Center (NEKVaC) is a new initiative by the Department of Energy and the Idaho National Laboratory to coordinate and focus the resources and expertise that exist within the DOE Complex toward solving issues in modern nuclear code validation. In time, code owners, users, and developers will view the Center as a partner and essential resource for acquiring the best practices and latest techniques for validating codes, for guidance in planning and executing experiments, for facilitating access to, and maximizing the usefulness of, existing data, and for preserving knowledge for continual use by nuclear professionals and organizations for their own validation needs. The scope of the Center covers many inter-related activities which will need to be cultivated carefully in the near term and managed properly once the Center is fully functional. Three areas comprise the principal mission: 1) identify and prioritize projects that extend the field of validation science and its application to modern codes, 2) adapt or develop best practices and guidelines for high-fidelity multiphysics/multiscale analysis code development and associated experiment design, and 3) define protocols for data acquisition and knowledge preservation and provide a portal for access to databases currently scattered among numerous organizations. These mission areas, while each having a unique focus, are inter-dependent and complementary. Likewise, all activities supported by the NEKVaC, both near-term and long-term, must possess elements supporting all three. This cross-cutting nature is essential to ensuring that activities and supporting personnel do not become ‘stove-piped’, i.e., focused so much on a specific function that the activity itself becomes the objective rather than achieving the larger vision. Achieving the broader vision will require a healthy and accountable level of activity in each of the areas.
This will take time and significant DOE support. Growing too fast (budget-wise) will not allow ideas to mature, lessons to be learned, and taxpayer money to be spent responsibly. The process should be initiated with a small set of tasks, executed over a short but reasonable term, that will exercise most if not all aspects of the Center’s potential operation. The initial activities described in this report have a high potential for near-term success in demonstrating Center objectives, while also working out issues in task execution and communication between functional elements, raising awareness of the Center, and cementing stakeholder buy-in. This report begins with a description of the mission areas; specifically, the role played by each and the types of activities for which they are responsible. It then lists and describes the proposed near-term tasks upon which future efforts can build.
Measuring homework completion in behavioral activation.
Busch, Andrew M; Uebelacker, Lisa A; Kalibatseva, Zornitsa; Miller, Ivan W
2010-07-01
The aim of this study was to develop and validate an observer-based coding system for the characterization and completion of homework assignments during Behavioral Activation (BA). Existing measures of homework completion are generally unsophisticated, and there is no current measure of homework completion designed to capture the particularities of BA. The tested scale sought to capture the type of assignment, realm of functioning targeted, extent of completion, and assignment difficulty. Homework assignments were drawn from 12 clients (mean age = 48 years; 83% female) in two trials of a 10-session BA manual targeting treatment-resistant depression in primary care. The two coders demonstrated acceptable or better reliability on most codes, and unreliable codes were dropped from the proposed scale. In addition, correlations between homework completion and outcome were strong, providing some support for construct validity. Ultimately, this line of research aims to develop a user-friendly, reliable measure of BA homework completion that can be completed by a therapist during session.
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
NASA Astrophysics Data System (ADS)
Mylnikova, Anna; Yasyukevich, Yury; Yasyukevich, Anna
2017-04-01
We have developed a technique for estimating vertical total electron content (TEC) and differential code biases (DCBs) using data from a single GPS/GLONASS station. The algorithm is based on TEC expansion into a Taylor series in space and time (TayAbsTEC). We validated the technique against Global Ionospheric Maps (GIM) computed by the Center for Orbit Determination in Europe (CODE) and the Jet Propulsion Laboratory (JPL). We compared the differences between the absolute vertical TEC (VTEC) from GIM and the VTEC evaluated by TayAbsTEC for 2009 (solar activity minimum, sunspot number about 0) and for 2014 (solar activity maximum, sunspot number about 110). Since the VTEC from CODE differs from the VTEC from JPL, we compared TayAbsTEC VTEC with both. We found that TayAbsTEC VTEC is closer to CODE VTEC than to JPL VTEC. The difference between TayAbsTEC VTEC and GIM VTEC is more noticeable at solar activity maximum (2014) than at solar activity minimum (2009) for both CODE and JPL. The distribution of VTEC differences is close to Gaussian, so we conclude that the TayAbsTEC results are in agreement with GIM VTEC. We also compared DCBs evaluated by TayAbsTEC with DCBs from GIM computed by CODE. The TayAbsTEC DCBs are in good agreement with the CODE DCBs for GPS satellites but differ noticeably for GLONASS. We used the DCBs to correct slant TEC to find out which DCBs give better results. Slant TEC correction with CODE DCBs produces negative, nonphysical TEC values; correction with TayAbsTEC DCBs does not produce such artifacts. The technique we developed can be used for VTEC and DCB calculation given only local GPS/GLONASS network data. The evaluated VTEC is provided in the GIM framework, which is convenient for various data analyses.
Validating the BISON fuel performance code to integral LWR experiments
Williamson, R. L.; Gamble, K. A.; Perez, D. M.; ...
2016-03-24
BISON is a modern finite element-based nuclear fuel performance code that has been under development at the Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON’s computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Our results demonstrate that 1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable, with deviations between predictions and experimental data within ±10% for early life through high-burnup fuel and only slightly outside these bounds for power ramp experiments, 2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties, and 3) comparison of rod diameter results indicates a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and to more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. The initial rod diameter comparisons were unsatisfactory and led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.
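The ±10% band quoted for centerline temperature reduces to a simple relative-deviation screen; a sketch with hypothetical predicted/measured pairs, not the benchmark data:

```python
# Screen predictions against measurements with a relative-deviation band,
# as in the ±10% acceptance criterion above. The (predicted, measured)
# temperature pairs are invented for illustration.

def within_band(predicted, measured, band=0.10):
    """True if |P - M| / M is inside the acceptance band."""
    return abs(predicted - measured) / measured <= band

cases = [(1105.0, 1080.0), (965.0, 1010.0), (1330.0, 1190.0)]  # K
flags = [within_band(p, m) for p, m in cases]
print(flags)  # the third pair deviates by ~12% and fails the screen
```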
Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F
2017-10-01
One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
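The intra- and inter-rater figures above are percent-agreement scores, i.e., the share of contributing factors given the identical code on two coding attempts. A sketch over hypothetical code assignments; the category names are invented, not the scheme's 14 system-level codes:

```python
# Percent agreement between two coding attempts over the same list of
# contributing factors. Labels below are hypothetical placeholders.

def percent_agreement(codes_a, codes_b):
    """Percentage of items assigned the identical code by both attempts."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

t1 = ["equipment", "supervision", "weather", "training", "equipment"]
t2 = ["equipment", "supervision", "terrain", "training", "planning"]
print(percent_agreement(t1, t2))
```

Note that raw percent agreement does not correct for chance, which is one reason chance-corrected statistics such as Cohen's kappa are often preferred.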
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry-and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
Martin, Caroline J Hollins; Kenney, Laurence; Pratt, Thomas; Granat, Malcolm H
2015-01-01
There is limited understanding of the type and extent of maternal postures that midwives should encourage or support during labor. The aims of this study were to identify a set of postures and movements commonly seen during labor, to develop an activity monitoring system for use during labor, and to validate this system design. Volunteer student midwives simulated maternal activity during labor in a laboratory setting. Participants (N = 15) wore monitors adhered to the left thigh and left shank, and adopted 13 common postures of laboring women for 3 minutes each. Simulated activities were recorded using a video camera. Postures and movements were coded from the video, and statistical analysis conducted of agreement between coded video data and outputs of the activity monitoring system. Excellent agreement between the 2 raters of the video recordings was found (Cohen's κ = 0.95). Both sensitivity and specificity of the activity monitoring system were greater than 80% for standing, lying, kneeling, and sitting (legs dangling). This validated system can be used to measure elected activity of laboring women and report on effects of postures on length of first stage, pain experience, birth satisfaction, and neonatal condition. This validated maternal posture-monitoring system is available as a reference-and for use by researchers who wish to develop research in this area. © 2015 by the American College of Nurse-Midwives.
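Cohen's kappa, used above to quantify agreement between the two video raters, corrects raw agreement for chance. A sketch with hypothetical posture labels, not the study's coded data:

```python
# Cohen's kappa for two raters coding the same items: observed agreement
# minus the agreement expected by chance, normalized. Posture labels
# below are invented for illustration.

def cohens_kappa(a, b):
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

rater1 = ["stand", "sit", "kneel", "lie", "stand", "sit"]
rater2 = ["stand", "sit", "kneel", "lie", "sit", "sit"]
print(round(cohens_kappa(rater1, rater2), 2))
```

Values near 1 indicate near-perfect chance-corrected agreement; the κ = 0.95 reported above sits in that range.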
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
Remarks on CFD validation: A Boeing Commercial Airplane Company perspective
NASA Technical Reports Server (NTRS)
Rubbert, Paul E.
1987-01-01
Requirements and meaning of validation of computational fluid dynamics codes are discussed. Topics covered include: validating a code, validating a user, and calibrating a code. All results are presented in viewgraph format.
FY15 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Shemon, E. R.; Smith, M. A.
2015-09-30
This report summarizes the current status of NEAMS activities in FY2015. The tasks this year are (1) to improve solution methods for steady-state and transient conditions, (2) to develop features and user friendliness to increase the usability and applicability of the code, (3) to improve and verify the multigroup cross section generation scheme, (4) to perform verification and validation tests of the code using SFRs and thermal reactor cores, and (5) to support early users of PROTEUS and update the user manuals.
Neutron streaming studies along JET shielding penetrations
NASA Astrophysics Data System (ADS)
Stamatelatos, Ion E.; Vasilopoulou, Theodora; Batistoni, Paola; Obryk, Barbara; Popovichev, Sergey; Naish, Jonathan
2017-09-01
Neutronic benchmark experiments are carried out at JET aiming to assess the neutronic codes and data used in ITER analysis. Among other activities, experiments are performed in order to validate neutron streaming simulations along long penetrations in the JET shielding configuration. In this work, neutron streaming calculations along the JET personnel entrance maze are presented. Simulations were performed using the MCNP code for Deuterium-Deuterium and Deuterium-Tritium plasma sources. The results of the simulations were compared against experimental data obtained using thermoluminescence detectors and activation foils.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon
The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.
A Comprehensive High Performance Predictive Tool for Fusion Liquid Metal Hydromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Peter; Chhabra, Rupanshi; Munipalli, Ramakanth
In the Phase I SBIR project, HyPerComp and Texcel initiated the development of two induction-based MHD codes as a predictive tool for fusion hydro-magnetics. The newly-developed codes overcome the deficiency of other MHD codes based on the quasi-static approximation by defining a more general mathematical model that utilizes the induced magnetic field rather than the electric potential as the main electromagnetic variable. The UCLA code is a finite-difference staggered-mesh code that serves as a supplementary tool to the massively-parallel finite-volume code developed by HyPerComp. As there is no suitable experimental data under blanket-relevant conditions for code validation, code-to-code comparisons and comparisons against analytical solutions were successfully performed for three selected test cases: (1) lid-driven MHD flow, (2) flow in a rectangular duct in a transverse magnetic field, and (3) unsteady finite magnetic Reynolds number flow in a rectangular enclosure. The performed tests suggest that the developed codes are accurate and robust. Further work will focus on enhancing the code capabilities towards higher flow parameters and faster computations. At the conclusion of the current Phase-II Project we have completed the preliminary validation efforts in performing unsteady mixed-convection MHD flows (against the limited data currently available in the literature), and demonstrated flow behavior in large 3D channels including important geometrical features. Code enhancements such as periodic boundary conditions and unmatched mesh structures are also ready. As proposed, we have built upon these strengths and explored a much increased range of Grashof numbers and Hartmann numbers under various flow conditions, ranging from flows in a rectangular duct to prototypic blanket modules and liquid metal PFC.
Parametric studies, numerical and physical model improvements to expand the scope of simulations, code demonstration, and continued validation activities have also been completed.
Scherr, Karen A.; Fagerlin, Angela; Williamson, Lillie D.; Davis, J. Kelly; Fridman, Ilona; Atyeo, Natalie; Ubel, Peter A.
2016-01-01
Background Physicians’ recommendations affect patients’ treatment choices. However, most research relies on physicians’ or patients’ retrospective reports of recommendations, which offer a limited perspective and have limitations such as recall bias. Objective To develop a reliable and valid method to measure the strength of physician recommendations using direct observation of clinical encounters. Methods Clinical encounters (n = 257) were recorded as part of a larger study of prostate cancer decision making. We used an iterative process to create the 5-point Physician Recommendation Coding System (PhyReCS). To determine reliability, research assistants double-coded 50 transcripts. To establish content validity, we used one-way ANOVAs to determine whether relative treatment recommendation scores differed as a function of which treatment patients received. To establish concurrent validity, we examined whether patients’ perceived treatment recommendations matched our coded recommendations. Results The PhyReCS was highly reliable (Krippendorff’s alpha = .89, 95% CI [.86, .91]). The average relative treatment recommendation score for each treatment was higher for individuals who received that particular treatment. For example, the average relative surgery recommendation score was higher for individuals who received surgery versus radiation (mean difference = .98, SE = .18, p < .001) or active surveillance (mean difference = 1.10, SE = .14, p < .001). Patients’ perceived recommendations matched coded recommendations 81% of the time. Conclusion The PhyReCS is a reliable and valid way to capture the strength of physician recommendations. We believe that the PhyReCS would be helpful for other researchers who wish to study physician recommendations, an important part of patient decision making. PMID:27343015
Scherr, Karen A; Fagerlin, Angela; Williamson, Lillie D; Davis, J Kelly; Fridman, Ilona; Atyeo, Natalie; Ubel, Peter A
2017-01-01
Physicians' recommendations affect patients' treatment choices. However, most research relies on physicians' or patients' retrospective reports of recommendations, which offer a limited perspective and have limitations such as recall bias. To develop a reliable and valid method to measure the strength of physician recommendations using direct observation of clinical encounters. Clinical encounters (n = 257) were recorded as part of a larger study of prostate cancer decision making. We used an iterative process to create the 5-point Physician Recommendation Coding System (PhyReCS). To determine reliability, research assistants double-coded 50 transcripts. To establish content validity, we used 1-way analyses of variance to determine whether relative treatment recommendation scores differed as a function of which treatment patients received. To establish concurrent validity, we examined whether patients' perceived treatment recommendations matched our coded recommendations. The PhyReCS was highly reliable (Krippendorff's alpha = 0.89, 95% CI [0.86, 0.91]). The average relative treatment recommendation score for each treatment was higher for individuals who received that particular treatment. For example, the average relative surgery recommendation score was higher for individuals who received surgery versus radiation (mean difference = 0.98, SE = 0.18, P < 0.001) or active surveillance (mean difference = 1.10, SE = 0.14, P < 0.001). Patients' perceived recommendations matched coded recommendations 81% of the time. The PhyReCS is a reliable and valid way to capture the strength of physician recommendations. We believe that the PhyReCS would be helpful for other researchers who wish to study physician recommendations, an important part of patient decision making. © The Author(s) 2016.
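The content-validity check described above (one-way analyses of variance testing whether recommendation scores differ by the treatment patients received) can be sketched with stdlib-only code. The scores below are invented for illustration; they are not PhyReCS data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of scores."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical relative surgery-recommendation scores, grouped by treatment received
surgery, radiation, surveillance = [4, 5, 4], [2, 3, 2], [1, 2, 1]
f_stat = one_way_anova_f([surgery, radiation, surveillance])
print(round(f_stat, 2))  # larger F means scores separate more cleanly by group
```

A large F (here 21.0 on 2 and 6 degrees of freedom) is the pattern the authors report: patients who received a treatment had received stronger coded recommendations for it.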
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru
We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
Measuring Homework Completion in Behavioral Activation
ERIC Educational Resources Information Center
Busch, Andrew M.; Uebelacker, Lisa A.; Kalibatseva, Zornitsa; Miller, Ivan W.
2010-01-01
The aim of this study was to develop and validate an observer-based coding system for the characterization and completion of homework assignments during Behavioral Activation (BA). Existing measures of homework completion are generally unsophisticated, and there is no current measure of homework completion designed to capture the particularities…
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The report considers the following codes: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section, covering its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; however, the codes have since been further developed to extend their capabilities.
STAR-CCM+ Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
2016-09-30
The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ditmars, J.D.; Walbridge, E.W.; Rote, D.M.
1983-10-01
Repository performance assessment is analysis that identifies events and processes that might affect a repository system for isolation of radioactive waste, examines their effects on barriers to waste migration, and estimates the probabilities of their occurrence and their consequences. In 1983 Battelle Memorial Institute's Office of Nuclear Waste Isolation (ONWI) prepared two plans - one for performance assessment for a waste repository in salt and one for verification and validation of performance assessment technology. At the request of the US Department of Energy's Salt Repository Project Office (SRPO), Argonne National Laboratory reviewed those plans and prepared this report to advise SRPO of specific areas where ONWI's plans for performance assessment might be improved. This report presents a framework for repository performance assessment that clearly identifies the relationships among the disposal problems, the processes underlying the problems, the tools for assessment (computer codes), and the data. In particular, the relationships among important processes and 26 model codes available to ONWI are indicated. A common suggestion for computer code verification and validation is the need for specific and unambiguous documentation of the results of performance assessment activities. A major portion of this report consists of status summaries of 27 model codes indicated as potentially useful by ONWI. The code summaries focus on three main areas: (1) the code's purpose, capabilities, and limitations; (2) status of the elements of documentation and review essential for code verification and validation; and (3) proposed application of the code for performance assessment of salt repository systems. 15 references, 6 figures, 4 tables.
Activity-Dependent Human Brain Coding/Noncoding Gene Regulatory Networks
Lipovich, Leonard; Dachet, Fabien; Cai, Juan; Bagla, Shruti; Balan, Karina; Jia, Hui; Loeb, Jeffrey A.
2012-01-01
While most gene transcription yields RNA transcripts that code for proteins, a sizable proportion of the genome generates RNA transcripts that do not code for proteins, but may have important regulatory functions. The brain-derived neurotrophic factor (BDNF) gene, a key regulator of neuronal activity, is overlapped by a primate-specific, antisense long noncoding RNA (lncRNA) called BDNFOS. We demonstrate reciprocal patterns of BDNF and BDNFOS transcription in highly active regions of human neocortex removed as a treatment for intractable seizures. A genome-wide analysis of activity-dependent coding and noncoding human transcription using a custom lncRNA microarray identified 1288 differentially expressed lncRNAs, of which 26 had expression profiles that matched activity-dependent coding genes and an additional 8 were adjacent to or overlapping with differentially expressed protein-coding genes. The functions of most of these protein-coding partner genes, such as ARC, include long-term potentiation, synaptic activity, and memory. The nuclear lncRNAs NEAT1, MALAT1, and RPPH1, composing an RNAse P-dependent lncRNA-maturation pathway, were also upregulated. As a means to replicate human neuronal activity, repeated depolarization of SY5Y cells resulted in sustained CREB activation and produced an inverse pattern of BDNF-BDNFOS co-expression that was not achieved with a single depolarization. RNAi-mediated knockdown of BDNFOS in human SY5Y cells increased BDNF expression, suggesting that BDNFOS directly downregulates BDNF. Temporal expression patterns of other lncRNA-messenger RNA pairs validated the effect of chronic neuronal activity on the transcriptome and implied various lncRNA regulatory mechanisms. lncRNAs, some of which are unique to primates, thus appear to have potentially important regulatory roles in activity-dependent human brain plasticity. PMID:22960213
Synchrotron characterization of nanograined UO2 grain growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mo, Kun; Miao, Yinbin; Yun, Di
2015-09-30
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.
Supplying materials needed for grain growth characterizations of nano-grained UO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mo, Kun; Miao, Yinbin; Yun, Di
2015-09-30
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.
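The activation-energy extraction mentioned above follows the standard Arrhenius analysis: isothermal grain-growth rate constants k(T) = k0·exp(-Q/RT) measured at two annealing temperatures determine Q directly. A sketch with made-up rate constants (not the report's UO2 data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Q from two isothermal rate constants, assuming k = k0 * exp(-Q / (R * T))."""
    # ln(k2/k1) = (Q/R) * (1/T1 - 1/T2)
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# hypothetical grain-growth rate constants from two annealing temperatures
Q = activation_energy(k1=1.0e-3, T1=1500.0, k2=1.0e-2, T2=1700.0)
print(f"Q = {Q / 1000:.1f} kJ/mol")
```

With more than two temperatures one would instead fit ln k against 1/T and take the slope as -Q/R, which also gives an uncertainty on Q.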
Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude
2017-12-01
This study examined the coding validity of hypertension, diabetes, obesity and depression related to the presence of their co-existing conditions, death status and the number of diagnosis codes in the hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
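Coding-validity studies of this kind reduce to comparing administrative diagnosis codes against a chart-review gold standard. A minimal sketch (the records below are hypothetical, not the Alberta data):

```python
def coding_validity(coded, chart):
    """Sensitivity and positive predictive value of a coding algorithm,
    taking chart review as the gold standard (1 = condition present)."""
    tp = sum(c and g for c, g in zip(coded, chart))          # coded and truly present
    fp = sum(c and not g for c, g in zip(coded, chart))      # coded but absent on chart
    fn = sum(not c and g for c, g in zip(coded, chart))      # present but not coded
    return tp / (tp + fn), tp / (tp + fp)

# hypothetical flags for one condition across eight patient records
coded = [1, 1, 1, 0, 0, 1, 0, 0]   # condition appears in discharge abstract codes
chart = [1, 1, 0, 1, 0, 1, 0, 0]   # condition confirmed by chart review
sens, ppv = coding_validity(coded, chart)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")
```

Sensitivity answers "of the true cases, how many were coded?" while PPV answers "of the coded cases, how many were true?"; both are needed before an algorithm is usable for surveillance.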
Development of code evaluation criteria for assessing predictive capability and performance
NASA Technical Reports Server (NTRS)
Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.
1993-01-01
Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four-phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code-to-data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.
Transportation fuel research and development : statistically validated codes and standards
DOT National Transportation Integrated Search
2007-08-28
The recent establishment of the National University Transportation Center at MST under the "Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users," expands the research and education activities to include alternative tr...
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy.
Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
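The Cummins formulation that WEC-Sim solves replaces frequency-dependent added mass and damping with an infinite-frequency added mass plus a radiation-memory convolution over past velocities. A heavily simplified 1-DOF sketch (explicit Euler with invented coefficients; none of this is WEC-Sim code):

```python
import math

# 1-DOF Cummins equation, explicit Euler (all coefficients invented for illustration):
#   (m + A_inf) * a(t) + integral_0^t K(t - tau) * v(tau) dtau + c * x(t) = F_exc(t)
m, A_inf, c = 1000.0, 200.0, 5.0e3   # dry mass, infinite-frequency added mass, hydrostatic stiffness
dt, steps = 0.01, 1000
# radiation impulse-response function: a decaying oscillation, as typically derived from BEM data
K = [50.0 * math.exp(-i * dt) * math.cos(2.0 * i * dt) for i in range(steps)]

x, v = 0.0, 0.0                      # heave position and velocity, starting at rest
vel_hist = []
for n in range(steps):
    t = n * dt
    F_exc = 1.0e3 * math.sin(1.5 * t)                            # regular-wave excitation force
    memory = dt * sum(K[n - j] * vel_hist[j] for j in range(n))  # radiation memory (convolution)
    a = (F_exc - memory - c * x) / (m + A_inf)
    v += a * dt
    x += v * dt
    vel_hist.append(v)
print(f"heave displacement after {steps * dt:.0f} s: {x:.3f} m")
```

Production solvers avoid the O(n^2) convolution by replacing the memory integral with a fitted state-space model, which is the "state space radiation" feature named above.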
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real-engineering problems that involve complex geometries with a high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.
On the validation of a code and a turbulence model appropriate to circulation control airfoils
NASA Technical Reports Server (NTRS)
Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.
1988-01-01
A computer code for calculating the flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to the design of such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm: the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements needed to proceed with the validation process are also discussed.
Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's (FDA) Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of erythema multiforme and related conditions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the erythema multiforme HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles that used administrative and claims data to identify erythema multiforme, Stevens-Johnson syndrome, or toxic epidermal necrolysis and that included validation estimates of the coding algorithms. Our search revealed limited literature focusing on erythema multiforme and related conditions that provided administrative and claims data-based algorithms and validation estimates. Only four studies provided validated algorithms and all studies used the same International Classification of Diseases code, 695.1. Approximately half of cases subjected to expert review were consistent with erythema multiforme and related conditions. Updated research needs to be conducted on designing validation studies that test algorithms for erythema multiforme and related conditions and that take into account recent changes in the diagnostic coding of these diseases. Copyright © 2012 John Wiley & Sons, Ltd.
A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, Ronald
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satyapal, Sunita
The 2011 Annual Progress Report summarizes fiscal year 2011 activities and accomplishments by projects funded by the DOE Hydrogen Program. It covers the program areas of hydrogen production and delivery; hydrogen storage; fuel cells; manufacturing; technology validation; safety, codes and standards; education; market transformation; and systems analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
The 2013 Annual Progress Report summarizes fiscal year 2013 activities and accomplishments by projects funded by the DOE Hydrogen Program. It covers the program areas of hydrogen production and delivery; hydrogen storage; fuel cells; manufacturing; technology validation; safety, codes and standards; market transformation; and systems analysis.
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system and which detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint-based description of the required performance for the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Di; Mo, Kun; Ye, Bei
2015-09-30
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL). Two major accomplishments in FY15 are summarized in this report: (1) implementation of the FASTGRASS module in the BISON code; and (2) a Xe implantation experiment for large-grained UO2. Both the BISON and MARMOT codes have been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. To contribute to the development of the Moose-Bison-Marmot (MBM) code suite, we have implemented the FASTGRASS fission gas model as a module in the BISON code. Based on rate theory formulations, the coupled FASTGRASS module in BISON is capable of modeling LWR oxide fuel fission gas behavior and fission gas release. In addition, we conducted a Xe implantation experiment at the Argonne Tandem Linac Accelerator System (ATLAS) in order to produce the needed UO2 samples with the desired bubble morphology. With these samples, further experiments to study the fission gas diffusivity are planned to provide validation data for the fission gas release model in MARMOT.
DSMC Simulations of Hypersonic Flows and Comparison With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.
2004-01-01
This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validation activities for a relatively new DSMC code called DS2V that Bird (second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and help identify the range of measurement variability of the proposed validation data when benchmarked with respect to the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.
2014-03-27
Verification and Validation of Monte Carlo N-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box. Thesis presented to the Faculty, Department of Engineering (AFIT-ENP-14-M-05). Distribution Statement A: approved for public release; distribution unlimited.
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Huber, Frank W.
1992-01-01
The current status of the activities and future plans of the Turbine Technology Team of the Consortium for Computational Fluid Dynamics is reviewed. The activities of the Turbine Team focus on developing and enhancing codes and models, obtaining data for code validation and general understanding of flows through turbines, and developing and analyzing the aerodynamic designs of turbines suitable for use in the Space Transportation Main Engine fuel and oxidizer turbopumps. Future work will include the experimental evaluation of the oxidizer turbine configuration, the development, analysis, and experimental verification of concepts to control secondary and tip losses, and the aerodynamic design, analysis, and experimental evaluation of turbine volutes.
A test of the validity of the motivational interviewing treatment integrity code.
Forsberg, Lars; Berman, Anne H; Kallmén, Håkan; Hermansson, Ulric; Helgason, Asgeir R
2008-01-01
To evaluate the Swedish version of the Motivational Interviewing Treatment Code (MITI), MITI coding was applied to tape-recorded counseling sessions. Construct validity was assessed using factor analysis on 120 MITI-coded sessions. Discriminant validity was assessed by comparing MITI coding of motivational interviewing (MI) sessions with information- and advice-giving sessions as well as by comparing MI-trained practitioners with untrained practitioners. A principal-axis factoring analysis yielded some evidence for MITI construct validity. MITI differentiated between practitioners with different levels of MI training as well as between MI practitioners and advice-giving counselors, thus supporting discriminant validity. MITI may be used as a training tool together with supervision to confirm and enhance MI practice in clinical settings. MITI can also serve as a tool for evaluating MI integrity in clinical research.
Radio controlled release apparatus for animal data acquisition devices
Stamps, James Frederick
2000-01-01
A novel apparatus for reliably and selectively releasing a data acquisition package from an animal for recovery. The data package comprises two parts: 1) an animal data acquisition device and 2) a co-located release apparatus. In one embodiment, which is useful for land animals, the release apparatus includes two major components: 1) an electronics package, comprising a receiver; a decoder comparator, having a plurality of individually selectable codes; and an actuator circuit; and 2) a release device, which can be a mechanical device, which acts to release the data package from the animal. To release a data package from a particular animal, a radio transmitter sends a coded signal which is decoded to determine if the code is valid for that animal's data package. Having received a valid code, the release device is activated to release the data package from the animal for subsequent recovery. A second embodiment includes flotation means and is useful for releasing animal data acquisition devices attached to sea animals. This embodiment further provides for releasing a data package underwater by employing an acoustic signal.
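The decode-and-compare step described above amounts to matching a received code against the package's preset selectable codes before firing the actuator. A hypothetical sketch (the names and code values are invented; the patent abstract does not specify an implementation):

```python
def should_release(received_code: int, selectable_codes: set) -> bool:
    """Decoder-comparator sketch: activate the release device only when
    the transmitted code is valid for this animal's data package."""
    return received_code in selectable_codes

# The package on one particular animal accepts two codes (values assumed);
# coded signals intended for other animals' packages are ignored.
package_codes = {0x2A7, 0x2A8}
```

Because each package carries its own set of selectable codes, a single transmitter can address one animal among many by broadcasting only that animal's code.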
NASA Technical Reports Server (NTRS)
Cornford, S.; Gibbel, M.
1997-01-01
NASA's Code QT Test Effectiveness Program is funding a series of applied research activities that apply the principles of physics and engineering of failure, together with those of engineering economics, to assess and improve the value added to organizations by their various validation and verification activities.
Myers, Beth M; Wells, Nancy M
2015-04-01
Gardens are a promising intervention to promote physical activity (PA) and foster health. However, because of the unique characteristics of gardening, no extant tool can capture the PA, postures, and motions that take place in a garden. The Physical Activity Research and Assessment tool for Garden Observation (PARAGON) was developed to assess children's PA levels, tasks, postures, motions, associations, and interactions while gardening. PARAGON uses momentary time sampling, in which a trained observer watches a focal child for 15 seconds and then records behavior for 15 seconds. Sixty-five children (38 girls, 27 boys) at 4 elementary schools in New York State were observed over 8 days. During the observation, children simultaneously wore Actigraph GT3X+ accelerometers. The overall interrater reliability was 88% agreement, and the Ebel reliability coefficient was .97. Percent agreement values for activity level (93%), garden tasks (93%), motions (80%), associations (95%), and interactions (91%) also met acceptable criteria. Validity was established by previously validated PA codes and by expected convergent validity with accelerometry. PARAGON is a valid and reliable observation tool for assessing children's PA in the context of gardening.
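Interval-by-interval percent agreement of the kind reported above (88% overall) can be computed directly from two observers' records. A minimal sketch with made-up interval codes (the task labels are illustrative, not PARAGON's actual code set):

```python
def percent_agreement(coder_a, coder_b):
    """Momentary time sampling yields one behavior code per observe-record
    cycle; percent agreement is the share of cycles in which both
    observers recorded the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("observers must code the same intervals")
    hits = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * hits / len(coder_a)

# Two observers coding the same focal child over 8 observation cycles.
obs1 = ["dig", "walk", "dig", "sit", "water", "water", "walk", "dig"]
obs2 = ["dig", "walk", "dig", "sit", "water", "weed",  "walk", "dig"]
agreement = percent_agreement(obs1, obs2)   # 7 of 8 intervals match
```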
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of validity is determined for each code instance as a function of the counted number of observances and the counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of the violated correctness rule.
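The counting scheme in this abstract can be made concrete. Below is a hypothetical Python illustration, not the patented implementation: each code instance carries observance and violation counts for a rule, and violations of rules that are almost always observed are ranked as the most likely true errors.

```python
def rank_likely_errors(counts):
    """counts maps a code instance (e.g. a particular lock variable) to
    (observances, violations) of a correctness rule such as
    'lock() must be followed by unlock()'.  The likelihood of validity
    is computed here as the observance fraction, so violations of
    rules that are nearly always observed are reported first."""
    report = []
    for instance, (obs, vio) in counts.items():
        total = obs + vio
        likelihood = obs / total if total else 0.0
        if vio:                       # only violated instances are reported
            report.append((likelihood, instance, vio))
    report.sort(reverse=True)         # most-likely-real errors first
    return report

# Assumed counts: a lock that is almost always paired, versus a flag
# whose apparent pairing is probably coincidental.
counts = {
    "dev->lock": (99, 1),
    "tmp_flag":  (2, 3),
}
report = rank_likely_errors(counts)
```

The single violation of `dev->lock` outranks the three violations of `tmp_flag`, since the former deviates from an otherwise consistently observed pattern.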
NASA Technical Reports Server (NTRS)
Borne, Kirk D.
2004-01-01
We completed this project by developing the new galaxy-galaxy collision simulation code that we originally proposed. We included star formation heuristically, along with gas recycling and energetic feedback into the interstellar medium of the galaxies. We ran several test simulations. And we finally validated the code by running models to emulate the double-ring galaxy AM 0644-741. Most of the exotic and unique features of this collisional ring galaxy were matched by our models, thereby validating our algorithm, which may now be used by us and by the community for additional dynamic studies of galaxies. A paper has been submitted to the Astrophysical Journal describing our algorithm and the results of the AM 0644-741 modeling.
Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.L.; Wilson, J.H.; Arwood, P.C.
The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, "pretest" calculations (for code-to-code comparisons) and "posttest" calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.
Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images
2009-12-01
Validation of the electromagnetic code FACETS for numerical simulation of radar target images. S. Wong, DRDC Ottawa. Validation of the electromagnetic code FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design
Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David
2014-01-01
Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.
Illum, Niels Ove; Gradel, Kim Oren
2017-01-01
To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring and to assess the validity and reliability of the data sets obtained. Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 and 0.00, respectively. Code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after repeat. Corresponding measures were -1.10 (range: -5.31 to 5.25) and -1.11 (range: -5.42 to 5.36), respectively. Based on measures obtained at the 2 occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities. 
And, first and foremost, the distribution of codes reflected a true continuity in disability, with codes for motor functions activated first, then codes for cognitive functions, and, finally, codes for more complex functions. Parents can assess their own children in a valid and reliable way, and if the WHO ICF-CY second-level code data set is functioning in a clinically sound way, it can be employed as a tool for identifying the severity of disabilities and for monitoring changes in those disabilities over time. The ICF-CY codes selected in this study might be one cornerstone in forming a national or even international generic set of ICF-CY codes for the benefit of children with disabilities, their parents, and caregivers, and for the whole community supporting children with disabilities on a daily and perpetual basis.
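The internal-consistency statistic reported above (Cronbach α of 0.96 and 0.97) follows a standard formula. A minimal sketch with toy qualifier scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the
    total score), using sample variances (ddof=1)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy example: two qualifier codes scored identically by three
# respondents gives perfect internal consistency (alpha = 1.0).
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

In the study, the matrix would hold the parents' 26 ICF-CY code qualifier scores per child; values near 1 indicate that the qualifiers behave as one coherent scale.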
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress in developing simulation tools to predict the behavior of nuclear systems with greater accuracy and in increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). 
To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') will be a resource for the validation efforts of industry, DOE programs, and academia.
Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.; Flores, Jolen
1989-01-01
Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling of transition to turbulence needs refinement, though preliminary results are promising.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely developed for fusion applications and more precisely for the ITER tokamak. This paper presents the rigorous two-step (R2S) scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; results are in good agreement with those of the other participants.
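In such a two-step scheme, a neutron-transport calculation feeds an activation/inventory calculation, whose decay-photon source is then transported again to produce the dose map. For a single nuclide, the inventory step reduces to the classic activation-decay formula. A sketch with assumed values, not MENDEL or TRIPOLI-4 output:

```python
import math

def activity(n_atoms, sigma_cm2, flux, lam, t_irr, t_cool):
    """Single-nuclide activation sketch for a rigorous two-step workflow:
    A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_cool),
    i.e. build-up toward saturation during irradiation, then pure decay
    during cooling.  Result is in decays per second."""
    saturation = n_atoms * sigma_cm2 * flux
    return saturation * (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)

# Assumed values: 1e20 target atoms, 1 barn capture cross-section,
# 1e13 n/cm^2/s flux, product nuclide with a 1-hour half-life.
lam = math.log(2.0) / 3600.0
a_shutdown = activity(1e20, 1e-24, 1e13, lam, t_irr=10 * 3600.0, t_cool=0.0)
a_cooled = activity(1e20, 1e-24, 1e13, lam, t_irr=10 * 3600.0, t_cool=3600.0)
```

After ten half-lives of irradiation the activity is essentially saturated, and one half-life of cooling halves it; summing such sources over all nuclides and mesh cells yields the decay-photon source for the second transport step.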
Independent Validation and Verification of automated information systems in the Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunteman, W.J.; Caldwell, R.
1994-07-01
The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory, and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).
Assessing Attachment in Psychotherapy: Validation of the Patient Attachment Coding System (PACS).
Talia, Alessandro; Miller-Bottome, Madeleine; Daniel, Sarah I F
2017-01-01
The authors present and validate the Patient Attachment Coding System (PACS), a transcript-based instrument that assesses clients' in-session attachment based on any session of psychotherapy, in multiple treatment modalities. One hundred and sixty clients in different types of psychotherapy (cognitive-behavioural, cognitive-behavioural-enhanced, psychodynamic, relational, supportive) and from three different countries were administered the Adult Attachment Interview (AAI) prior to treatment, and one session for each client was rated with the PACS by independent coders. Results indicate strong inter-rater reliability and high convergent validity of the PACS scales and classifications with the AAI. These results present the PACS as a practical alternative to the AAI in psychotherapy research and suggest that clinicians using the PACS can assess clients' attachment status on an ongoing basis by monitoring clients' verbal activity. These results also provide information regarding the ways in which differences in attachment status play out in therapy sessions and further the study of attachment in psychotherapy from a pre-treatment client factor to a process variable. Key points: the PACS is a valid measure of attachment that can classify clients' attachment based on any single psychotherapy transcript, in many therapeutic modalities; client differences in attachment manifest in part independently of the therapist's contributions; client adult attachment patterns are likely to affect psychotherapeutic processes. Copyright © 2015 John Wiley & Sons, Ltd.
Coupled CFD/CSD Analysis of an Active-Twist Rotor in a Wind Tunnel with Experimental Validation
NASA Technical Reports Server (NTRS)
Massey, Steven J.; Kreshock, Andrew R.; Sekula, Martin K.
2015-01-01
An unsteady Reynolds averaged Navier-Stokes analysis loosely coupled with a comprehensive rotorcraft code is presented for a second-generation active-twist rotor. High fidelity Navier-Stokes results for three configurations: an isolated rotor, a rotor with fuselage, and a rotor with fuselage mounted in a wind tunnel, are compared to lifting-line theory based comprehensive rotorcraft code calculations and wind tunnel data. Results indicate that CFD/CSD predictions of flapwise bending moments are in good agreement with wind tunnel measurements for configurations with a fuselage, and that modeling the wind tunnel environment does not significantly enhance computed results. Actuated rotor results for the rotor with fuselage configuration are also validated for predictions of vibratory blade loads and fixed-system vibratory loads. Varying levels of agreement with wind tunnel measurements are observed for blade vibratory loads, depending on the load component (flap, lag, or torsion) and the harmonic being examined. Predicted trends in fixed-system vibratory loads are in good agreement with wind tunnel measurements.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
Activation of accelerator construction materials by heavy ions
NASA Astrophysics Data System (ADS)
Katrík, P.; Mustafin, E.; Hoffmann, D. H. H.; Pavlovič, M.; Strašík, I.
2015-12-01
Activation data for an aluminum target irradiated by a 200 MeV/u 238U ion beam are presented in the paper. The target was irradiated in the stacked-foil geometry and analyzed using gamma-ray spectroscopy. The purpose of the experiment was to study the role of primary particles, projectile fragments, and target fragments in the activation process using depth profiling of the residual activity. The study identified which particles contribute dominantly to the target activation. The experimental data were compared with Monte Carlo simulations by the FLUKA 2011.2c.0 code. This study is part of a research program devoted to the activation of accelerator construction materials by high-energy (⩾200 MeV/u) heavy ions at GSI Darmstadt. The experimental data are needed to validate the computer codes used for simulation of the interaction of swift heavy ions with matter.
Woon, Yuan-Liang; Lee, Keng-Yee; Mohd Anuar, Siti Fatimah Zahra; Goh, Pik-Pin; Lim, Teck-Onn
2018-04-20
Hospitalization due to dengue illness is an important measure of dengue morbidity. However, few studies are based on administrative databases because the validity of the diagnosis codes is unknown. We validated the International Classification of Diseases, 10th revision (ICD) diagnosis coding for dengue infections in the Malaysian Ministry of Health's (MOH) hospital discharge database. This validation study involves retrospective review of available hospital discharge records and hand-searching of medical records for the years 2010 and 2013. We randomly selected 3219 hospital discharge records coded with dengue and non-dengue infections as their discharge diagnoses from the national hospital discharge database. We then randomly sampled 216 and 144 records for patients with and without codes for dengue, respectively, in keeping with their relative frequency in the MOH database, for chart review. The ICD codes for dengue were validated against a lab-based diagnostic standard (NS1 or IgM). The ICD-10-CM codes for dengue had a sensitivity of 94%, a modest specificity of 83%, a positive predictive value of 87%, and a negative predictive value of 92%. These results were stable between 2010 and 2013. However, specificity decreased substantially when patients manifested with bleeding or a low platelet count. The diagnostic performance of the ICD codes for dengue in the MOH's hospital discharge database is adequate for use in health services research on dengue.
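The validity metrics reported above follow the standard 2×2 confusion-matrix definitions; a minimal sketch in Python (the cell counts below are hypothetical, chosen only for illustration, not taken from the study):

```python
def diagnostic_validity(tp, fp, fn, tn):
    """Standard validity metrics for a diagnosis code against a gold standard.

    tp/fp/fn/tn are cell counts of the 2x2 table: code-positive vs
    lab-confirmed status (true/false positives and negatives).
    """
    sensitivity = tp / (tp + fn)   # fraction of lab-confirmed cases the code finds
    specificity = tn / (tn + fp)   # fraction of lab-negative cases the code excludes
    ppv = tp / (tp + fp)           # fraction of coded cases that are truly positive
    npv = tn / (tn + fn)           # fraction of uncoded cases that are truly negative
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only
sens, spec, ppv, npv = diagnostic_validity(tp=94, fp=14, fn=6, tn=69)
```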
2009 Annual Progress Report: DOE Hydrogen Program, November 2009 (Book)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2009-11-01
This report summarizes the hydrogen and fuel cell R&D activities and accomplishments of the DOE Hydrogen Program for FY2009. It covers the program areas of hydrogen production and delivery; fuel cells; manufacturing; technology validation; safety, codes and standards; education; and systems analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; Gough, Sean T.
This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous-energy cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.
Zhu, Shiyou; Li, Wei; Liu, Jingze; Chen, Chen-Hao; Liao, Qi; Xu, Ping; Xu, Han; Xiao, Tengfei; Cao, Zhongzheng; Peng, Jingyu; Yuan, Pengfei; Brown, Myles; Liu, Xiaole Shirley; Wei, Wensheng
2017-01-01
CRISPR/Cas9 screens have been widely adopted to analyse coding gene functions, but high throughput screening of non-coding elements using this method is more challenging, because indels caused by a single cut in non-coding regions are unlikely to produce a functional knockout. A high-throughput method to produce deletions of non-coding DNA is needed. Herein, we report a high throughput genomic deletion strategy to screen for functional long non-coding RNAs (lncRNAs) that is based on a lentiviral paired-guide RNA (pgRNA) library. Applying our screening method, we identified 51 lncRNAs that can positively or negatively regulate human cancer cell growth. We individually validated 9 lncRNAs using CRISPR/Cas9-mediated genomic deletion and functional rescue, CRISPR activation or inhibition, and gene expression profiling. Our high-throughput pgRNA genome deletion method should enable rapid identification of functional mammalian non-coding elements. PMID:27798563
Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.
2012-01-01
Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
Jones, B E; South, B R; Shao, Y; Lu, C C; Leng, J; Sauer, B C; Gundlapalli, A V; Samore, M H; Zeng, Q
2018-01-01
Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes. This article (1) develops a NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia. Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in training and validation sets. Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80). 
System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty. Schattauer GmbH Stuttgart.
Irma 5.1 multisensor signature prediction model
NASA Astrophysics Data System (ADS)
Savage, James; Coker, Charles; Edwards, Dave; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles
2006-05-01
The Irma synthetic signature prediction code is being developed to facilitate the research and development of multi-sensor systems. Irma was one of the first high resolution, physics-based Infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel, and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Irma is currently used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. In 2005, Irma version 5.1 was released to the community. 
In addition to upgrading the Ladar channel code to an object oriented language (C++) and providing a new graphical user interface to construct scenes, this new release significantly improves the modeling of the ladar channel and includes polarization effects, time jittering, speckle effect, and atmospheric turbulence. More importantly, the Munitions Directorate has funded three field tests to verify and validate the re-engineered ladar channel. Each of the field tests was comprehensive and included one month of sensor characterization and a week of data collection. After each field test, the analysis included comparisons of Irma predicted signatures with measured signatures, and if necessary, refining the model to produce realistic imagery. This paper will focus on two areas of the Irma 5.1 development effort: report on the analysis results of the validation and verification of the Irma 5.1 ladar channel, and the software development plan and validation efforts of the Irma passive channel. As scheduled, the Irma passive code is being re-engineered using object oriented language (C++), and field data collection is being conducted to validate the re-engineered passive code. This software upgrade will remove many constraints and limitations of the legacy code including limits on image size and facet counts. The field test to validate the passive channel is expected to be complete in the second quarter of 2006.
The Activities of the European Consortium on Nuclear Data Development and Analysis for Fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, U., E-mail: ulrich.fischer@kit.edu; Avrigeanu, M.; Avrigeanu, V.
This paper presents an overview of the activities of the European Consortium on Nuclear Data Development and Analysis for Fusion. The Consortium combines available European expertise to provide services for the generation, maintenance, and validation of nuclear data evaluations and data files relevant for ITER, IFMIF and DEMO, as well as codes and software tools required for related nuclear calculations.
Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview
NASA Technical Reports Server (NTRS)
Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing & anti-icing) and ANTICE (hot-gas & electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.
NASA Astrophysics Data System (ADS)
Zhou, Abel; White, Graeme L.; Davidson, Rob
2018-02-01
Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated through Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids computed by the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating the designs of grids.
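The grid figures of merit named above (Tp, Ts, Tt, SPR) follow standard definitions as ratios of transmitted to incident fluence; a minimal sketch, with hypothetical photon counts rather than values from the paper:

```python
def grid_metrics(primary_in, scatter_in, primary_out, scatter_out):
    """Standard anti-scatter grid figures of merit, computed from photon
    counts incident on and transmitted through the grid."""
    t_p = primary_out / primary_in                                 # primary transmission
    t_s = scatter_out / scatter_in                                 # scatter transmission
    t_t = (primary_out + scatter_out) / (primary_in + scatter_in)  # total transmission
    spr = scatter_out / primary_out                                # scatter-to-primary ratio
    return t_p, t_s, t_t, spr

# Hypothetical counts for illustration only
t_p, t_s, t_t, spr = grid_metrics(primary_in=1000, scatter_in=1000,
                                  primary_out=700, scatter_out=100)
```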
Overview of Edge Simulation Laboratory (ESL)
NASA Astrophysics Data System (ADS)
Cohen, R. H.; Dorr, M.; Hittinger, J.; Rognlien, T.; Umansky, M.; Xiong, A.; Xu, X.; Belli, E.; Candy, J.; Snyder, P.; Colella, P.; Martin, D.; Sternberg, T.; van Straalen, B.; Bodi, K.; Krasheninnikov, S.
2006-10-01
The ESL is a new collaboration to build a full-f electromagnetic gyrokinetic code for tokamak edge plasmas using continuum methods. Target applications are edge turbulence and transport (neoclassical and anomalous), and edge-localized modes. Initially the project has three major threads: (i) verification and validation of TEMPEST, the project's initial (electrostatic) edge code, which can be run in 4D (neoclassical and transport-timescale applications) or 5D (turbulence); (ii) design of the next generation code, which will include more complete physics (electromagnetics, fluid equation option, improved collisions) and advanced numerics (fully conservative, high-order discretization, mapped multiblock grids, adaptivity); and (iii) rapid-prototype codes to explore the issues attached to solving fully nonlinear gyrokinetics with steep radial gradients. We present a brief summary of the status of each of these activities.
Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code
NASA Technical Reports Server (NTRS)
Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William
2006-01-01
The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.
CFD validation needs for advanced concepts at Northrop Corporation
NASA Technical Reports Server (NTRS)
George, Michael W.
1987-01-01
Information is given in viewgraph form on the Computational Fluid Dynamics (CFD) Workshop held July 14 - 16, 1987. Topics covered include the philosophy of CFD validation, current validation efforts, the wing-body-tail Euler code, F-20 Euler simulated oil flow, and Euler Navier-Stokes code validation for 2D and 3D nozzle afterbody applications.
2014 Annual Progress Report: DOE Hydrogen and Fuel Cells Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2014-11-01
The 2014 Annual Progress Report summarizes fiscal year 2014 activities and accomplishments by projects funded by the DOE Hydrogen Program. It covers the program areas of hydrogen production and delivery; hydrogen storage; fuel cells; manufacturing; technology validation; safety, codes and standards; market transformation; and systems analysis.
Comparison of codes assessing galactic cosmic radiation exposure of aircraft crew.
Bottollier-Depois, J F; Beck, P; Bennett, B; Bennett, L; Bütikofer, R; Clairand, I; Desorgher, L; Dyer, C; Felsberger, E; Flückiger, E; Hands, A; Kindl, P; Latocha, M; Lewis, B; Leuthold, G; Maczka, T; Mares, V; McCall, M J; O'Brien, K; Rollet, S; Rühm, W; Wissmann, F
2009-10-01
The assessment of exposure to cosmic radiation onboard aircraft is a key concern of bodies responsible for radiation protection. The cosmic particle flux is significantly higher onboard aircraft than at ground level, and its intensity depends on the solar activity. The dose is usually estimated using codes validated against experimental data. In this paper, a comparison of various codes, some of them used routinely, is presented to assess the dose received by aircraft crew from galactic cosmic radiation. Results are provided for periods close to solar maximum and minimum and for selected flights covering major commercial routes in the world. The overall agreement between the codes, particularly for those routinely used for aircraft crew dosimetry, was better than ±20% from the median in all but two cases. The agreement among the codes is considered to be fully satisfactory for radiation protection purposes.
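The ±20%-from-median comparison described above can be expressed compactly; a minimal sketch with made-up dose values (not data from the intercomparison):

```python
import statistics

def within_tolerance(doses, tol=0.20):
    """Flag which code results lie within ±tol (fractional) of the median dose."""
    med = statistics.median(doses)
    return [abs(d - med) / med <= tol for d in doses]

# Made-up per-code route doses, for illustration only; the last one
# deviates from the median by more than 20%
flags = within_tolerance([4.8, 5.0, 5.2, 6.3])
```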
The Modified Cognitive Constructions Coding System: Reliability and Validity Assessments
ERIC Educational Resources Information Center
Moran, Galia S.; Diamond, Gary M.
2006-01-01
The cognitive constructions coding system (CCCS) was designed for coding client's expressed problem constructions on four dimensions: intrapersonal-interpersonal, internal-external, responsible-not responsible, and linear-circular. This study introduces, and examines the reliability and validity of, a modified version of the CCCS--a version that…
HBOI Underwater Imaging and Communication Research - Phase 1
2012-04-19
…validation of one-way pulse stretching radiative transfer code. The objective was to develop and validate time-resolved radiative transfer models that… and validation of one-way pulse stretching radiative transfer code. The models were subjected to a series of validation experiments over 12.5 meter… about the theoretical basis of the model together with validation results can be found in Dalgleish et al. (2010). Forward scattering Mueller
Improvements in the simulation code of the SOX experiment
NASA Astrophysics Data System (ADS)
Caminata, A.; Agostini, M.; Altenmüeller, K.; Appel, S.; Atroshchenko, V.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; Cribier, M.; D'Angelo, D.; Davini, S.; Derbin, A.; Di Noto, L.; Drachnev, I.; Durero, M.; Etenko, A.; Farinon, S.; Fischer, V.; Fomenko, K.; Franco, D.; Gabriele, F.; Gaffiot, J.; Galbiati, C.; Gschwender, M.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, Th.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jonquères, N.; Jany, A.; Jedrzejczak, K.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kornoukhov, V.; Kryn, D.; Lachenmaier, T.; Lasserre, T.; Laubenstein, M.; Lehnert, B.; Link, J.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Manuzio, G.; Marcocci, S.; Maricic, J.; Mention, G.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Musenich, R.; Neumair, B.; Oberauer, L.; Obolensky, M.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Scola, L.; Semenov, D.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Veyssiére, C.; Vishneva, A.; Vivier, M.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Winter, J.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2017-09-01
The aim of the SOX experiment is to test the hypothesis of the existence of light sterile neutrinos through a short-baseline experiment. Electron antineutrinos will be produced by a high-activity source and detected in the Borexino detector. Both an oscillometry approach and a conventional disappearance analysis will be performed and, if combined, SOX will be able to investigate most of the anomaly region at 95% c.l. This paper focuses on the improvements made to the simulation code and on the techniques (calibrations) used to validate the results.
A simplified model for tritium permeation transient predictions when trapping is active
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1994-09-01
This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes, such as implantation, recombination, diffusion, trapping, and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN
NASA Technical Reports Server (NTRS)
Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.
1996-01-01
A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, a description of the code, a user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.
An RL10A-3-3A rocket engine model using the rocket engine transient simulator (ROCETS) software
NASA Technical Reports Server (NTRS)
Binder, Michael
1993-01-01
Steady-state and transient computer models of the RL10A-3-3A rocket engine have been created using the Rocket Engine Transient Simulation (ROCETS) code. These models were created for several purposes. The RL10 engine is a critical component of past, present, and future space missions; the model will give NASA an in-house capability to simulate the performance of the engine under various operating conditions and mission profiles. The RL10 simulation activity is also an opportunity to further validate the ROCETS program. The ROCETS code is an important tool for modeling rocket engine systems at NASA Lewis. ROCETS provides a modular and general framework for simulating the steady-state and transient behavior of any desired propulsion system. Although the ROCETS code is being used in a number of different analysis and design projects within NASA, it has not been extensively validated for any system using actual test data. The RL10A-3-3A has a ten year history of test and flight applications; it should provide sufficient data to validate the ROCETS program capability. The ROCETS models of the RL10 system were created using design information provided by Pratt & Whitney, the engine manufacturer. These models are in the process of being validated using test-stand and flight data. This paper includes a brief description of the models and comparison of preliminary simulation output against flight and test-stand data.
Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling
Lareo, Angel; Forlim, Caroline G.; Pinto, Reynaldo D.; Varona, Pablo; Rodriguez, Francisco de Borja
2016-01-01
Closed-loop activity-dependent stimulation is a powerful methodology to assess information processing in biological systems. In this context, the development of novel protocols, their implementation in bioinformatics toolboxes and their application to different description levels open up a wide range of possibilities in the study of biological systems. We developed a methodology for studying biological signals representing them as temporal sequences of binary events. A specific sequence of these events (code) is chosen to deliver a predefined stimulation in a closed-loop manner. The response to this code-driven stimulation can be used to characterize the system. This methodology was implemented in a real time toolbox and tested in the context of electric fish signaling. We show that while there are codes that evoke a response that cannot be distinguished from a control recording without stimulation, other codes evoke a characteristic distinct response. We also compare the code-driven response to open-loop stimulation. The discussed experiments validate the proposed methodology and the software toolbox. PMID:27766078
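The protocol's core idea, firing a stimulus when the trailing window of binarized events matches a chosen target code, can be sketched as follows (the event stream and code below are invented for illustration; the actual toolbox operates in real time on electric fish signals):

```python
from collections import deque

def code_driven_triggers(events, code):
    """For each incoming binary event, report whether the trailing window
    of events now matches the target code, i.e. whether a code-driven
    stimulation would fire at that point."""
    window = deque(maxlen=len(code))
    fired = []
    for e in events:
        window.append(e)
        fired.append(len(window) == len(code) and list(window) == list(code))
    return fired

# Invented event stream; the code [0, 1, 1] appears twice, so the
# stimulation would fire twice
fired = code_driven_triggers([0, 1, 0, 1, 1, 0, 1, 1], [0, 1, 1])
```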
The evolution of the genetic code: Impasses and challenges.
Kun, Ádám; Radványi, Ádám
2018-02-01
The origin of the genetic code and translation is a "notoriously difficult problem". In this survey we present a list of questions that a full theory of the genetic code needs to answer. We assess the leading hypotheses according to these criteria. The stereochemical, the coding coenzyme handle, the coevolution, the four-column theory, the error minimization and the frozen accident hypotheses are discussed. The integration of these hypotheses can account for the origin of the genetic code. But experiments are badly needed. Thus we suggest a host of experiments that could (in)validate some of the models. We focus especially on the coding coenzyme handle hypothesis (CCH). The CCH suggests that amino acids attached to RNA handles enhanced catalytic activities of ribozymes. Alternatively, amino acids without handles or with a handle consisting of a single adenine, like in contemporary coenzymes could have been employed. All three scenarios can be tested in in vitro compartmentalized systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Monte Carlo-based validation of neutronic methodology for EBR-II analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, J.R.; Finck, P.J.
1993-01-01
The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.
PCC Framework for Program-Generators
NASA Technical Reports Server (NTRS)
Kong, Soonho; Choi, Wontae; Yi, Kwangkeun
2009-01-01
In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis method, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached to the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
Additional extensions to the NASCAP computer code, volume 2
NASA Technical Reports Server (NTRS)
Stannard, P. R.; Katz, I.; Mandell, M. J.
1982-01-01
Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model, including charge emission, is presented. A simple code based upon the model is described along with code test results.
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
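The role analytic benchmarks play in verification can be illustrated with a minimal sketch: for a one-group, infinite-medium problem the multiplication factor has the closed form k∞ = νΣf/Σa, so a transport code's result can be checked against an exact answer. The cross-section values and the quoted Monte Carlo result below are illustrative assumptions, not numbers from the MCNP suite.

```python
# Minimal analytic criticality benchmark: one-group infinite medium.
# Exact solution: k_inf = nu*Sigma_f / Sigma_a, so a transport code's
# multigroup result can be verified against this closed form.

def k_infinity(nu_sigma_f: float, sigma_a: float) -> float:
    """Exact one-group infinite-medium multiplication factor."""
    return nu_sigma_f / sigma_a

# Illustrative macroscopic cross sections (1/cm), not from the MCNP suite.
nu_sigma_f = 0.0816
sigma_a = 0.0800

k_exact = k_infinity(nu_sigma_f, sigma_a)
k_code = 1.0195        # hypothetical Monte Carlo result, sigma ~ 0.0005

# Verification check: code result within 3 sigma of the analytic answer.
assert abs(k_code - k_exact) < 3 * 0.0005
print(f"k_inf exact = {k_exact:.4f}")
```

Validation, by contrast, would compare the code against measured benchmark experiments rather than a closed-form answer.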
2015 Annual Progress Report: DOE Hydrogen and Fuel Cells Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The 2015 Annual Progress Report summarizes fiscal year 2015 activities and accomplishments by projects funded by the DOE Hydrogen and Fuel Cells Program. It covers the program areas of hydrogen production; hydrogen delivery; hydrogen storage; fuel cells; manufacturing R&D; technology validation; safety, codes and standards; systems analysis; and market transformation.
Methodology for the nuclear design validation of an Alternate Emergency Management Centre (CAGE)
NASA Astrophysics Data System (ADS)
Hueso, César; Fabbri, Marco; de la Fuente, Cristina; Janés, Albert; Massuet, Joan; Zamora, Imanol; Gasca, Cristina; Hernández, Héctor; Vega, J. Ángel
2017-09-01
The methodology is devised by coupling different codes. The study of weather conditions, as part of the site data, determines the relative concentrations of radionuclides in the air using ARCON96. The activity in the air is characterized, depending on the source and release sequence specified in NUREG-1465, by the RADTRAD code, which provides results for the inner-cloud source term contribution. From the known activities, energy spectra are inferred using ORIGEN-S and used as input for the models of the outer cloud, filters and containment generated with MCNP5. The sum of the different contributions must meet the habitability conditions specified by the CSN (Spanish Nuclear Regulatory Body): TEDE < 50 mSv and equivalent dose to the thyroid < 500 mSv within 30 days following the accident. The dose is then optimized by varying parameters such as the CAGE location, filtration flow, need for recirculation, and the thicknesses and compositions of the walls. The results for the most penalizing area meet the established criteria, and therefore the CAGE building design based on the presented methodology is radiologically validated.
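The habitability check described above reduces to summing the per-pathway dose contributions and comparing the totals against both CSN limits. A minimal sketch with hypothetical per-pathway doses (only the two limit values come from the abstract):

```python
# CSN habitability limits quoted above (30-day doses).
TEDE_LIMIT_MSV = 50.0
THYROID_LIMIT_MSV = 500.0

def habitable(tede_contributions, thyroid_contributions) -> bool:
    """True if the summed 30-day doses meet both habitability limits."""
    return (sum(tede_contributions) < TEDE_LIMIT_MSV
            and sum(thyroid_contributions) < THYROID_LIMIT_MSV)

# Hypothetical per-pathway doses (mSv): outer cloud, filters, containment.
tede = [12.4, 6.1, 9.8]
thyroid = [180.0, 95.0, 110.0]
print("CAGE location acceptable:", habitable(tede, thyroid))
```

In the methodology above, parameters such as location or wall thickness would be varied until this check passes for the most penalizing area.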
EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.
2014-04-01
The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ), are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.
The Facial Expression Coding System (FACES): Development, Validation, and Utility
ERIC Educational Resources Information Center
Kring, Ann M.; Sloan, Denise M.
2007-01-01
This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…
Prediction of plant lncRNA by ensemble machine learning classifiers.
Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian
2018-05-02
In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances in discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class, with applications largely focused on animal systems. Our approach uses a training set composed only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply as computationally predicted hypothetical proteins.
Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
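The stacking idea described above, in which base learners' scores are combined by a meta-learner that then ranks candidates, can be sketched in miniature. The synthetic data, the two base-score columns, and the hand-rolled logistic meta-learner below are illustrative stand-ins, not the authors' feature set or trained models:

```python
# Minimal stacking illustration: two base classifiers' scores are combined
# by a logistic-regression meta-learner, and candidates are ranked by the
# stacked probability. All data are synthetic stand-ins.
import math
import random

random.seed(0)

# Synthetic task: label is 1 when the hidden signal x + 0.5*z exceeds 0.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1 if x + 0.5 * z > 0 else 0 for x, z in data]

base1 = [x for x, _ in data]   # stand-in for gradient-boosting scores
base2 = [z for _, z in data]   # stand-in for random-forest scores

# Meta-learner: logistic regression on the base scores, fit by gradient descent.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5
n = len(labels)
for _ in range(500):
    g1 = g2 = gb = 0.0
    for s1, s2, y in zip(base1, base2, labels):
        p = 1.0 / (1.0 + math.exp(-(w1 * s1 + w2 * s2 + b)))
        err = p - y
        g1 += err * s1
        g2 += err * s2
        gb += err
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# Rank candidates by the stacked probability, as the classifier above ranks
# putative lncRNAs for functional validation.
probs = [1.0 / (1.0 + math.exp(-(w1 * s1 + w2 * s2 + b)))
         for s1, s2 in zip(base1, base2)]
acc = sum((p > 0.5) == bool(y) for p, y in zip(probs, labels)) / n
print(f"stacked training accuracy: {acc:.2f}")
```

In practice the base scores would be out-of-fold predictions from the gradient-boosting and random-forest models on sequence-derived features, which avoids leaking training labels into the meta-learner.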
Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of hypersensitivity reactions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the hypersensitivity reactions of health outcomes of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify hypersensitivity reactions and including validation estimates of the coding algorithms. We identified five studies that provided validated hypersensitivity-reaction algorithms. Algorithm positive predictive values (PPVs) for various definitions of hypersensitivity reactions ranged from 3% to 95%. PPVs were high (i.e. 90%-95%) when both exposures and diagnoses were very specific. PPV generally decreased when the definition of hypersensitivity was expanded, except in one study that used data mining methodology for algorithm development. The ability of coding algorithms to identify hypersensitivity reactions varied, with decreasing performance occurring with expanded outcome definitions. This examination of hypersensitivity-reaction coding algorithms provides an example of surveillance bias resulting from outcome definitions that include mild cases. Data mining may provide tools for algorithm development for hypersensitivity and other health outcomes. Research needs to be conducted on designing validation studies to test hypersensitivity-reaction algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
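The metrics quoted above (PPV, sensitivity, specificity) all derive from a 2x2 comparison of algorithm-flagged cases against a reference review. A small sketch with illustrative counts (not data from the cited studies) shows how a very specific algorithm can reach a high PPV while missing most true cases:

```python
# Screening metrics for a coding algorithm from a 2x2 confusion table:
# tp/fp = algorithm-flagged cases that were / were not confirmed on review,
# fn/tn = unflagged cases that were / were not true cases.

def validation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard validation metrics for an outcome-identification algorithm."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged fraction of true cases
        "specificity": tn / (tn + fp),   # unflagged fraction of non-cases
        "ppv": tp / (tp + fp),           # true cases among flagged
        "npv": tn / (tn + fn),           # non-cases among unflagged
    }

# Illustrative counts: a narrow algorithm trades sensitivity for high PPV.
m = validation_metrics(tp=19, fp=1, fn=81, tn=899)
print({k: round(v, 3) for k, v in m.items()})
```

Widening the outcome definition typically moves counts from fn to tp and from tn to fp, raising sensitivity while lowering PPV, which is the pattern the review describes.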
Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun
2013-12-01
A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and national policy-making for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) developed the DGS successfully. We developed the modified PECARN DGS based on the different pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on the pediatric emergent conditions of the selected population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which was coded by the International Classification of Diseases, 10th Revision (ICD-10) code system. To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes with those of the PECARN DGS by the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes by GEMs, we matched ICD-9 codes into PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in development of the modified DGS. After 1078 (57.4%) of 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups by 2 rounds of electronic Delphi surveys. The remaining 29 codes (4%) were assigned to the modified DGS at the second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of diagnoses in the 2008 and 2009 National Emergency Department Information System data sets. The modified DGS also exhibited strong construct validity using the concepts of age, sex, site of care, and seasons. It also reflected the 2009 outbreak of H1N1 influenza in Korea.
We developed and validated clinically feasible and sensible DGS system for describing pediatric emergent conditions in Korea. The modified PECARN DGS showed good comprehensiveness and demonstrated reliable construct validity. This modified DGS based on PECARN DGS framework may be effectively implemented for research, reporting, and resource planning in pediatric emergency system of South Korea.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
Further Validation of a CFD Code for Calculating the Performance of Two-Stage Light Gas Guns
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.
2017-01-01
Earlier validations of a higher-order Godunov code for modeling the performance of two-stage light gas guns are reviewed. These validation comparisons were made between code predictions and experimental data from the NASA Ames 1.5" and 0.28" guns and covered muzzle velocities of 6.5 to 7.2 km/s. In the present report, five more series of code validation comparisons, involving experimental data from the Ames 0.22" (1.28" pump tube diameter), 0.28", 0.50", 1.00" and 1.50" guns, are presented. The total muzzle velocity range of the validation data presented herein is 3 to 11.3 km/s. The agreement between the experimental data and CFD results is judged to be very good. Muzzle velocities were predicted within 0.35 km/s for 74% of the cases studied; the maximum differences were 0.5 km/s, except for 4 of the 50 cases, where they were 0.5-0.7 km/s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.
2011-02-01
This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.
NASA Astrophysics Data System (ADS)
Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.
2014-06-01
The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.
Criticality Calculations with MCNP6 - Practical Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross-section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes; and, given that the diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present, identify a fissile system for which a diffusion theory solution would be adequate.
Organizational Effectiveness Information System (OEIS) User’s Manual
1986-09-01
SUBJECT CODES B-1; C. LISTING OF VALID RESOURCE SYSTEM CODES C-1. ... the valid codes used in the Implementation and Design System: MACOM 01, COE 02, DARCOM 03, EUSA 04, FORSCOM 05, HSC 06, HQDA 07, INSCOM 08, MDW 09
Greiver, Michelle; Wintemute, Kimberly; Aliarzadeh, Babak; Martin, Ken; Khan, Shahriar; Jackson, Dave; Leggett, Jannet; Lambert-Lanning, Anita; Siu, Maggie
2016-10-12
Consistent and standardized coding for chronic conditions is associated with better care; however, coding may currently be limited in electronic medical records (EMRs) used in Canadian primary care. Objectives: To implement data management activities in a community-based primary care organisation and to evaluate the effects on coding for chronic conditions. Fifty-nine family physicians in Toronto, Ontario, belonging to a single primary care organisation, participated in the study. The organisation implemented a central analytical data repository containing their EMR data extracted, cleaned, standardized and returned by the Canadian Primary Care Sentinel Surveillance Network (CPCSSN), a large validated primary care EMR-based database. They used reporting software provided by CPCSSN to identify selected chronic conditions, and standardized codes were then added back to the EMR. We studied four chronic conditions (diabetes, hypertension, chronic obstructive pulmonary disease and dementia). We compared changes in coding over six months for physicians in the organisation with changes for 315 primary care physicians participating in CPCSSN across Canada. Chronic disease coding within the organisation increased significantly more than in other primary care sites. The adjusted difference in the increase of coding was 7.7% (95% confidence interval 7.1%-8.2%, p < 0.01). The use of standard codes, consisting of the most common diagnostic codes for each condition in the CPCSSN database, increased by 8.9% more (95% CI 8.3%-9.5%, p < 0.01). Data management activities were associated with an increase in standardized coding for chronic conditions. Exploring requirements to scale and spread this approach in Canadian primary care organisations may be worthwhile.
Komeda, Masao; Kawasaki, Kozo; Obara, Toru
2013-04-01
We studied a new silicon irradiation holder with a neutron filter designed to make the vertical neutron flux profile uniform. Since an irradiation holder has to be made of a low activation material, we applied aluminum blended with B4C as the holder material. Irradiation methods to achieve uniform flux with a filter are discussed using Monte-Carlo calculation code MVP. Validation of the use of the MVP code for the holder's analyses is also discussed via characteristic experiments. Copyright © 2013 Elsevier Ltd. All rights reserved.
Validity and reliability of the Fitbit Zip as a measure of preschool children’s step count
Sharp, Catherine A; Mackintosh, Kelly A; Erjavec, Mihela; Pascoe, Duncan M; Horne, Pauline J
2017-01-01
Objectives: Validation of physical activity measurement tools is essential to determine the relationship between physical activity and health in preschool children, but research to date has not focused on this priority. The aims of this study were to ascertain inter-rater reliability of observer step count, and interdevice reliability and validity of Fitbit Zip accelerometer step counts in preschool children. Methods: Fifty-six children aged 3–4 years (29 girls) recruited from 10 nurseries in North Wales, UK, wore two Fitbit Zip accelerometers while performing a timed walking task in their childcare settings. Accelerometers were worn in secure pockets inside a custom-made tabard. Video recordings enabled two observers to independently code the number of steps performed in 3 min by each child during the walking task. Intraclass correlations (ICCs), concordance correlation coefficients, Bland-Altman plots and absolute per cent error were calculated to assess the reliability and validity of the consumer-grade device. Results: An excellent ICC was found between the two observer codings (ICC=1.00) and the two Fitbit Zips (ICC=0.91). Concordance between the Fitbit Zips and observer counts was also high (r=0.77), with an acceptable absolute per cent error (6%–7%). Bland-Altman analyses identified a bias for Fitbit 1 of 22.8±19.1 steps with limits of agreement between −14.7 and 60.2 steps, and a bias for Fitbit 2 of 25.2±23.2 steps with limits of agreement between −20.2 and 70.5 steps. Conclusions: Fitbit Zip accelerometers are a reliable and valid method of recording preschool children’s step count in a childcare setting. PMID:29081984
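The Bland-Altman quantities reported above (bias and limits of agreement) and the absolute percent error are straightforward to compute from paired counts. The step counts below are illustrative placeholders, not the study's data:

```python
# Bland-Altman agreement statistics: bias is the mean device-minus-observer
# difference; limits of agreement (LoA) are bias +/- 1.96 * SD of differences.
import statistics

observer = [310, 295, 330, 305, 320, 298]   # illustrative observer counts
fitbit = [332, 318, 355, 322, 348, 317]     # illustrative device counts

diffs = [f - o for f, o in zip(fitbit, observer)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# Absolute percent error, as reported alongside the Bland-Altman analysis.
ape = statistics.mean(abs(f - o) / o for f, o in zip(fitbit, observer)) * 100

print(f"bias = {bias:.1f} steps, LoA = ({loa[0]:.1f}, {loa[1]:.1f}), APE = {ape:.1f}%")
```

A positive bias, as here and in the study, means the device systematically counts more steps than the observers.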
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and against solutions from the FPVortex code. The validated code was then used to perform computations to study fuel-air mixing in the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
Sukanya, Chongthawonsatid
2017-10-01
This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews, and additionally, by comparing data from the coding assessments with data in the computerized ICD (the database used for reimbursement purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codings often contained mistakes, particularly in the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and codings.
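The sensitivity, specificity, and predictive values reported in coding audits like this one derive from a standard 2×2 comparison of recorded codes against a reference review. A minimal sketch (Python; the counts are hypothetical, not taken from this study):

```python
def diagnostic_validity(tp, fp, fn, tn):
    """Standard 2x2 validity metrics used in coding-audit studies:
    tp/fp/fn/tn are counts of true/false positives/negatives
    relative to the reference (medical record review)."""
    return {
        "sensitivity": tp / (tp + fn),  # recorded code catches true cases
        "specificity": tn / (tn + fp),  # absence of code rules out cases
        "ppv": tp / (tp + fp),          # recorded code is actually correct
        "npv": tn / (tn + fn),          # absence of code is actually correct
    }

# Hypothetical audit counts
metrics = diagnostic_validity(tp=30, fp=2, fn=70, tn=98)
```

The pattern in the abstract (low sensitivity, high specificity) corresponds to many missed cases (large `fn`) but few spurious codes (small `fp`).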
Extension, validation and application of the NASCAP code
NASA Technical Reports Server (NTRS)
Katz, I.; Cassidy, J. J., III; Mandell, M. J.; Schnuelle, G. W.; Steen, P. G.; Parks, D. E.; Rotenberg, M.; Alexander, J. H.
1979-01-01
Numerous extensions were made to the NASCAP code. They fall into three categories: a greater range of definable objects, a more sophisticated computational model, and simplified code structure and usage. An important validation of NASCAP was performed using a new two-dimensional computer code (TWOD). An interactive code (MATCHG) was written to compare material parameter inputs with charging results. The first major application of NASCAP was performed on the SCATHA satellite. Shadowing and charging calculations were completed. NASCAP was installed at the Air Force Geophysics Laboratory, where researchers plan to use it to interpret SCATHA data.
2016 Annual Progress Report: DOE Hydrogen and Fuel Cells Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The 2016 Annual Progress Report summarizes fiscal year 2016 activities and accomplishments by projects funded by the DOE Hydrogen and Fuel Cells Program. It covers the program areas of hydrogen production; hydrogen delivery; hydrogen storage; fuel cells; manufacturing R&D; technology validation; safety, codes and standards; systems analysis; market transformation; and Small Business Innovation Research projects.
Challenges in using medicaid claims to ascertain child maltreatment.
Raghavan, Ramesh; Brown, Derek S; Allaire, Benjamin T; Garfield, Lauren D; Ross, Raven E; Hedeker, Donald
2015-05-01
Medicaid data contain International Classification of Diseases, Clinical Modification (ICD-9-CM) codes indicating maltreatment, yet there is little information on how valid these codes are for the purposes of identifying maltreatment from health, as opposed to child welfare, data. This study assessed the validity of Medicaid codes in identifying maltreatment. Participants (n = 2,136) in the first National Survey of Child and Adolescent Well-Being were linked to their Medicaid claims obtained from 36 states. Caseworker determinations of maltreatment were compared with eight sets of ICD-9-CM codes. Of the 1,921 children identified by caseworkers as being maltreated, 15.2% had any relevant ICD-9-CM code in any of their Medicaid files across 4 years of observation. Maltreated boys and those of African American race had lower odds of displaying a maltreatment code. Using only Medicaid claims to identify maltreated children creates validity problems. Medicaid data linkage with other types of administrative data is required to better identify maltreated children. © The Author(s) 2014.
Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.; Lee, Chi-Miag (Technical Monitor)
2001-01-01
For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.
Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
2002-01-01
For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.
Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
2002-01-01
For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.
Experimental Validation of an Ion Beam Optics Code with a Visualized Ion Thruster
NASA Astrophysics Data System (ADS)
Nakayama, Yoshinori; Nakano, Masakatsu
For validation of an ion beam optics code, the behavior of ion beam optics was experimentally observed and evaluated with a two-dimensional visualized ion thruster (VIT). Since the observed beam focus positions, sheath positions, and measured ion beam currents were in good agreement with the numerical results, it was confirmed that the numerical model of this code was appropriate. In addition, it was also confirmed that the beam focus position moved along the center axis of the grid hole according to the applied grid potentials, which differs from the conventional understanding/assumption. The VIT operations may be useful not only for the validation of ion beam optics codes but also for a fundamental and intuitive understanding of Child law sheath theory.
Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.
2005-01-01
In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with a first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard
2011-07-01
There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm "Harborview AIS Mapping Program (HAMP)" to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was [kappa] = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP versus manually determined ISS was [kappa] = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
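The kappa statistic used to compare HAMP-derived and manually determined codes is Cohen's chance-corrected measure of agreement between two raters. A minimal sketch (Python; the labels are illustrative, not the study's records):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal label frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative codes from two raters (hypothetical)
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```

Values around 0.8, as reported for HAMP, are conventionally read as strong agreement beyond chance.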
Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamitsu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto
2015-07-01
The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. Copyright © 2015 Elsevier B.V. All rights reserved.
Verification and validation of RADMODL Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimball, K.D.
1993-03-01
RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.
WEC-SIM Validation Testing Plan FY14 Q4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley Michelle
2016-02-01
The WEC-Sim project is currently on track, having met both the SNL and NREL FY14 Milestones, as shown in Table 1 and Table 2. This is also reflected in the Gantt chart uploaded to the WEC-Sim SharePoint site in the FY14 Q4 Deliverables folder. The work completed in FY14 includes code verification through code-to-code comparison (FY14 Q1 and Q2), preliminary code validation through comparison to experimental data (FY14 Q2 and Q3), presentation and publication of the WEC-Sim project at OMAE 2014 [1], [2], [3] and GMREC/METS 2014 [4] (FY14 Q3), WEC-Sim code development and public open-source release (FY14 Q3), and development of a preliminary WEC-Sim validation test plan (FY14 Q4). This report presents the preliminary Validation Testing Plan developed in FY14 Q4. The validation test effort started in FY14 Q4 and will continue through FY15. Thus far the team has developed a device selection method, selected a device, placed a contract with the testing facility, established several collaborations including industry contacts, and developed working ideas on testing details such as scaling, device design, and test conditions.
Rudmik, Luke; Xu, Yuan; Kukec, Edward; Liu, Mingfu; Dean, Stafford; Quan, Hude
2016-11-01
Pharmacoepidemiological research using administrative databases has become increasingly popular for chronic rhinosinusitis (CRS); however, without a validated case definition the cohort evaluated may be inaccurate resulting in biased and incorrect outcomes. The objective of this study was to develop and validate a generalizable administrative database case definition for CRS using International Classification of Diseases, 9th edition (ICD-9)-coded claims. A random sample of 100 patients with a guideline-based diagnosis of CRS and 100 control patients were selected and then linked to a Canadian physician claims database from March 31, 2010, to March 31, 2015. The proportion of CRS ICD-9-coded claims (473.x and 471.x) for each of these 200 patients were reviewed and the validity of 7 different ICD-9-based coding algorithms was evaluated. The CRS case definition of ≥2 claims with a CRS ICD-9 code (471.x or 473.x) within 2 years of the reference case provides a balanced validity with a sensitivity of 77% and specificity of 79%. Applying this CRS case definition to the claims database produced a CRS cohort of 51,000 patients with characteristics that were consistent with published demographics and rates of comorbid asthma, allergic rhinitis, and depression. This study has validated several coding algorithms; based on the results a case definition of ≥2 physician claims of CRS (ICD-9 of 471.x or 473.x) within 2 years provides an optimal level of validity. Future studies will need to validate this administrative case definition from different health system perspectives and using larger retrospective chart reviews from multiple providers. © 2016 ARS-AAOA, LLC.
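The validated case definition (≥2 claims with a CRS ICD-9 code of 471.x or 473.x within 2 years) can be expressed as a simple claims filter. A hedged sketch (Python; the `claims` layout and the 730-day window are illustrative assumptions, not the study's implementation):

```python
from datetime import date, timedelta

# ICD-9 prefixes for the CRS case definition (471.x and 473.x)
CRS_CODES = ("471", "473")

def meets_crs_definition(claims, window_days=730):
    """claims: iterable of (service_date, icd9_code) pairs for one patient.
    Returns True if at least two CRS-coded claims fall within any
    2-year window, per the validated case definition."""
    dates = sorted(d for d, code in claims if code.startswith(CRS_CODES))
    # With dates sorted, it suffices to check consecutive pairs.
    return any(later - earlier <= timedelta(days=window_days)
               for earlier, later in zip(dates, dates[1:]))
```

In a real study the threshold, window, and code list would be tuned against chart review, exactly the sensitivity/specificity trade-off the authors evaluated across seven algorithms.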
NASA Astrophysics Data System (ADS)
Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.
2008-12-01
Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. 
Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
Fuel-Air Mixing and Combustion in Scramjets
NASA Technical Reports Server (NTRS)
Drummond, J. P.; Diskin, Glenn S.; Cutler, A. D.
2002-01-01
Activities in the area of scramjet fuel-air mixing and combustion associated with the Research and Technology Organization Working Group on Technologies for Propelled Hypersonic Flight are described. Work discussed in this paper has centered on the design of two basic experiments for studying the mixing and combustion of fuel and air in a scramjet. Simulations were conducted to aid in the design of these experiments. The experimental models were then constructed, and data were collected in the laboratory. Comparison of the data from a coaxial jet mixing experiment and a supersonic combustor experiment with a combustor code were then made and described. This work was conducted by NATO to validate combustion codes currently employed in scramjet design and to aid in the development of improved turbulence and combustion models employed by the codes.
The Proteus Navier-Stokes code
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Bui, Trong T.; Cavicchi, Richard H.; Conley, Julianne M.; Molls, Frank B.; Schwab, John R.
1992-01-01
An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. The emphasis in the development of Proteus is not algorithm development or research on numerical methods, but rather the development of the code itself. The objective is to develop codes that are user-oriented, easily-modified, and well-documented. Well-proven, state-of-the-art solution algorithms are being used. Code readability, documentation (both internal and external), and validation are being emphasized. This paper is a status report on the Proteus development effort. The analysis and solution procedure are described briefly, and the various features in the code are summarized. The results from some of the validation cases that have been run are presented for both the two- and three-dimensional codes.
NASA Astrophysics Data System (ADS)
Nilsson, E.; Decker, J.; Peysson, Y.; Artaud, J.-F.; Ekedahl, A.; Hillairet, J.; Aniel, T.; Basiuk, V.; Goniche, M.; Imbeaux, F.; Mazon, D.; Sharma, P.
2013-08-01
Fully non-inductive operation with lower hybrid current drive (LHCD) in the Tore Supra tokamak is achieved using either a fully active multijunction (FAM) launcher or a more recent ITER-relevant passive active multijunction (PAM) launcher, or both launchers simultaneously. While both antennas show comparable experimental efficiencies, the analysis of stability properties in long discharges suggests different current profiles. We present comparative modelling of LHCD with the two different launchers to characterize the effect of the respective antenna spectra on the driven current profile. The interpretative modelling of LHCD is carried out using a chain of codes calculating, respectively, the global discharge evolution (tokamak simulator METIS), the spectrum at the antenna mouth (LH coupling code ALOHA), the LH wave propagation (ray-tracing code C3PO), and the distribution function (3D Fokker-Planck code LUKE). Essential aspects of the fast electron dynamics in time, space and energy are obtained from hard x-ray measurements of fast electron bremsstrahlung emission using a dedicated tomographic system. LHCD simulations are validated by systematic comparisons between these experimental measurements and the reconstructed signal calculated by the code R5X2 from the LUKE electron distribution. An excellent agreement is obtained in the presence of strong Landau damping (found under low density and high-power conditions in Tore Supra) for which the ray-tracing model is valid for modelling the LH wave propagation. Two aspects of the antenna spectra are found to have a significant effect on LHCD. First, the driven current is found to be proportional to the directivity, which depends upon the respective weight of the main positive and main negative lobes and is particularly sensitive to the density in front of the antenna. Second, the position of the main negative lobe in the spectrum is different for the two launchers.
As this lobe drives a counter-current, the resulting driven current profile is also different for the FAM and PAM launchers.
Genome-scale transcriptional activation by an engineered CRISPR-Cas9 complex.
Konermann, Silvana; Brigham, Mark D; Trevino, Alexandro E; Joung, Julia; Abudayyeh, Omar O; Barcena, Clea; Hsu, Patrick D; Habib, Naomi; Gootenberg, Jonathan S; Nishimasu, Hiroshi; Nureki, Osamu; Zhang, Feng
2015-01-29
Systematic interrogation of gene function requires the ability to perturb gene expression in a robust and generalizable manner. Here we describe structure-guided engineering of a CRISPR-Cas9 complex to mediate efficient transcriptional activation at endogenous genomic loci. We used these engineered Cas9 activation complexes to investigate single-guide RNA (sgRNA) targeting rules for effective transcriptional activation, to demonstrate multiplexed activation of ten genes simultaneously, and to upregulate long intergenic non-coding RNA (lincRNA) transcripts. We also synthesized a library consisting of 70,290 guides targeting all human RefSeq coding isoforms to screen for genes that, upon activation, confer resistance to a BRAF inhibitor. The top hits included genes previously shown to be able to confer resistance, and novel candidates were validated using individual sgRNA and complementary DNA overexpression. A gene expression signature based on the top screening hits correlated with markers of BRAF inhibitor resistance in cell lines and patient-derived samples. These results collectively demonstrate the potential of Cas9-based activators as a powerful genetic perturbation technology.
Development and Assessment of CTF for Pin-resolved BWR Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Wysocki, Aaron J; Collins, Benjamin S
2017-01-01
CTF is the modernized and improved version of the subchannel code COBRA-TF. It has been adopted by the Consortium for Advanced Simulation of Light Water Reactors (CASL) for subchannel analysis applications and thermal hydraulic feedback calculations in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). CTF is now jointly developed by Oak Ridge National Laboratory and North Carolina State University. Until now, CTF has been used for pressurized water reactor modeling and simulation in CASL, but in the future it will be extended to boiling water reactor designs. This required development activities to integrate the code into the VERA-CS workflow and to make it more efficient for full-core, pin-resolved simulations. Additionally, there is a significant emphasis in CASL on producing high-quality tools that follow a regimented software quality assurance plan. Part of this plan involves performing validation and verification assessments on the code that are easily repeatable and tied to specific code versions. This work has resulted in the CTF validation and verification matrix being expanded to include several two-phase flow experiments, including the General Electric 3x3 facility and the BWR Full-Size Fine Mesh Bundle Tests (BFBT). Agreement with both experimental databases is reasonable, but the BFBT analysis reveals a tendency of CTF to overpredict void, especially in the slug flow regime. The execution of these tests is fully automated, the analysis is documented in the CTF Validation and Verification manual, and the tests have become part of the CASL continuous regression testing system. This paper will summarize these recent developments and some of the two-phase assessments that have been performed on CTF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, Yinbin; Mo, Kun; Jamison, Laura M.
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize the experimental efforts in FY16, including the following important experiments: (1) in-situ grain growth measurement of nano-grained UO2; (2) investigation of surface morphology in micro-grained UO2; (3) nano-indentation experiments on nano- and micro-grained UO2. The highlight of this year is that we successfully demonstrated the capability to measure grain size evolution in situ while maintaining the stoichiometry of nano-grained UO2 materials; the experiment is the first to use synchrotron X-ray diffraction to measure the grain growth behavior of UO2 in situ.
Carnahan, Ryan M; Kee, Vicki R
2012-01-01
This paper aimed to systematically review algorithms to identify transfusion-related ABO incompatibility reactions in administrative data, with a focus on studies that have examined the validity of the algorithms. A literature search was conducted using PubMed, Iowa Drug Information Service database, and Embase. A Google Scholar search was also conducted because of the difficulty identifying relevant studies. Reviews were conducted by two investigators to identify studies using data sources from the USA or Canada because these data sources were most likely to reflect the coding practices of Mini-Sentinel data sources. One study was found that validated International Classification of Diseases (ICD-9-CM) codes representing transfusion reactions. None of these cases were ABO incompatibility reactions. Several studies consistently used ICD-9-CM code 999.6, which represents ABO incompatibility reactions, and a technical report identified the ICD-10 code for these reactions. One study included the E-code E8760 for mismatched blood in transfusion in the algorithm. Another study reported finding no ABO incompatibility reaction codes in the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, which contains data of 2.23 million patients who received transfusions, raising questions about the sensitivity of administrative data for identifying such reactions. Two studies reported perfect specificity, with sensitivity ranging from 21% to 83%, for the code identifying allogeneic red blood cell transfusions in hospitalized patients. There is no information to assess the validity of algorithms to identify transfusion-related ABO incompatibility reactions. Further information on the validity of algorithms to identify transfusions would also be useful. Copyright © 2012 John Wiley & Sons, Ltd.
CFD Code Development for Combustor Flows
NASA Technical Reports Server (NTRS)
Norris, Andrew
2003-01-01
During the lifetime of this grant, work has been performed in the areas of model development, code development, code validation, and code application. Model development included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics, and assumed-PDF work. Many of these models were then implemented in the code, and many improvements were made to the code itself, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models, and turbulence-chemistry interaction methodology. All new models and code improvements were validated, and the code was applied to both the ZCET program and the NPSS GEW combustor program. Several important items remain under development, including NOx post-processing, assumed-PDF model development, and chemical kinetics development. It is expected that this work will continue under the new grant.
Validity of the coding for herpes simplex encephalitis in the Danish National Patient Registry.
Jørgensen, Laura Krogh; Dalgaard, Lars Skov; Østergaard, Lars Jørgen; Andersen, Nanna Skaarup; Nørgaard, Mette; Mogensen, Trine Hyrup
2016-01-01
Large health care databases are a valuable source for infectious disease epidemiology if diagnoses are valid. The aim of this study was to investigate the accuracy of the recorded diagnosis coding of herpes simplex encephalitis (HSE) in the Danish National Patient Registry (DNPR). The DNPR was used to identify all hospitalized patients, aged ≥15 years, with a first-time diagnosis of HSE according to the International Classification of Diseases, tenth revision (ICD-10), from 2004 to 2014. To validate the coding of HSE, we collected data from the Danish Microbiology Database, from departments of clinical microbiology, and from patient medical records. Cases were classified as confirmed, probable, or no evidence of HSE. We estimated the positive predictive value (PPV) of the HSE diagnosis coding stratified by diagnosis type, study period, and department type. Furthermore, we estimated the proportion of HSE cases coded with nonspecific ICD-10 codes of viral encephalitis, as well as the sensitivity of the HSE diagnosis coding. We were able to validate 398 (94.3%) of the 422 HSE diagnoses identified via the DNPR. Of these, 202 (50.8%) were classified as confirmed cases and 29 (7.3%) as probable cases, yielding an overall PPV of 58.0% (95% confidence interval [CI]: 53.0-62.9). For "Encephalitis due to herpes simplex virus" (ICD-10 code B00.4), the PPV was 56.6% (95% CI: 51.1-62.0). Similarly, the PPV for "Meningoencephalitis due to herpes simplex virus" (ICD-10 code B00.4A) was 56.8% (95% CI: 39.5-72.9). "Herpes viral encephalitis" (ICD-10 code G05.1E) had a PPV of 75.9% (95% CI: 56.5-89.7), thereby representing the highest PPV. The estimated sensitivity was 95.5%. The PPVs of the ICD-10 diagnosis coding for adult HSE in the DNPR were relatively low. Hence, the DNPR should be used with caution when studying patients with encephalitis caused by herpes simplex virus.
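The overall PPV above follows directly from the validated counts: (202 confirmed + 29 probable) out of 398 validated diagnoses. A quick sketch using a normal-approximation interval; the published interval may use an exact method, so the bounds can differ slightly in the last digit:

```python
import math

def ppv_with_ci(true_pos, total_pos, z=1.96):
    """Positive predictive value with a normal-approximation 95% CI."""
    p = true_pos / total_pos
    se = math.sqrt(p * (1 - p) / total_pos)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Confirmed + probable HSE cases out of validated DNPR diagnoses
ppv, lo, hi = ppv_with_ci(202 + 29, 398)  # ~0.580 (0.532-0.629)
```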
Eckert, Katharina G; Lange, Martin A
2015-03-14
Physical activity questionnaires (PAQ) have been extensively used to determine physical activity (PA) levels. Most PAQ are derived from an energy expenditure-based perspective and assess activities with a certain intensity level. Activities with a moderate or vigorous intensity level are predominantly used to determine a person's PA level in terms of quantity. Studies show that the time spent engaging in moderate and vigorous intensity PA does not appropriately reflect the actual PA behavior of older people because they perform more functional, everyday activities. Those functional activities are more likely to be considered low-intensity and represent an important qualitative health-promoting activity. For the elderly, functional, light-intensity activities are of special interest but are assessed differently in terms of quantity and quality. The aim was to analyze the content of PAQ for the elderly. N = 18 sufficiently validated PAQ applicable to adults (60+) were included. Each item (N = 414) was linked to the corresponding code of the International Classification of Functioning, Disability and Health (ICF) using established linking rules. Kappa statistics were calculated to determine rater agreement. Items were linked to 598 ICF codes and 62 different ICF categories. A total of 43.72% of the codes were for sports-related activities and 14.25% for walking-related activities. Only 9.18% of all codes were related to household tasks. Light-intensity, functional activities are emphasized differently and are underrepresented in most cases. Additionally, sedentary activities are underrepresented (5.55%). κ coefficients were acceptable for n = 16 questionnaires (0.48-1.00). There is large inconsistency in the understanding of PA in the elderly.
Further research should focus (1) on a conceptual understanding of PA in terms of the behavior of the elderly and (2) on developing questionnaires that assess functional, light-intensity PA, as well as sedentary activities, more explicitly.
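The rater agreement reported above is a kappa statistic. A minimal sketch of Cohen's kappa for two raters, using illustrative ICF category codes rather than the study's actual linking data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assigned a category to the same n items."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two raters linking four questionnaire items to ICF codes (made-up data)
rater_1 = ["d450", "d450", "d640", "d640"]
rater_2 = ["d450", "d640", "d640", "d640"]
kappa = cohens_kappa(rater_1, rater_2)  # 0.5
```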
ERIC Educational Resources Information Center
Arffman, Inga
2016-01-01
Open-ended (OE) items are widely used to gather data on student performance in international achievement studies. However, several factors may threaten validity when using such items. This study examined Finnish coders' opinions about threats to validity when coding responses to OE items in the PISA 2012 problem-solving test. A total of 6…
Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing
NASA Astrophysics Data System (ADS)
Salamone, Joseph A., III
Sonic boom focusing theory has been augmented with new terms that account for mean-flow effects in the direction of propagation and for atmospheric absorption and dispersion caused by molecular relaxation of oxygen and nitrogen. The newly derived model equation was implemented numerically in a computer code. The code was numerically validated against a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check was performed to verify the linear diffraction component of the code calculations. The code was experimentally validated using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic Analysis and Measurement Program (SCAMP) flight test, and was in good agreement with both the numerical and experimental validation cases. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.
Validation of NASA Thermal Ice Protection Computer Codes Part 2 - LEWICE/Thermal
DOT National Transportation Integrated Search
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal 1 (electrothermal de-icing and anti-icing), and ANTICE 2 (hot gas and el...
NASA Astrophysics Data System (ADS)
Lee, Yi-Kang
2017-09-01
Nuclear decommissioning takes place in several stages because of the radioactivity in the reactor structural materials. A good estimate of the neutron activation products distributed in these materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To extend the application of TRIPOLI-4 to nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structural materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of deep-penetration neutron calculation with a Monte Carlo transport code, variance reduction techniques are necessary to reduce the uncertainty of the neutron activation estimate. In this study, variance reduction options of the TRIPOLI-4 code were exercised on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work to make code validation easier. Fission neutron transport was determined in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked against each other. Variance reduction options and their performance were discussed and compared.
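As a toy illustration of why variance reduction matters for deep penetration, and of one classic technique, geometric splitting at importance planes, consider estimating forward transmission through a purely absorbing 1D slab ten mean free paths thick (true answer exp(-10) ≈ 4.5e-5, so an analog simulation scores almost never). This is an idealized sketch, not TRIPOLI-4's variance reduction machinery:

```python
import math
import random

def transmission_splitting(depth, planes, split, n_hist, seed=1):
    """Estimate forward transmission through a purely absorbing slab
    (mean free path = 1) using geometric splitting: each particle
    crossing an importance plane becomes `split` copies, each carrying
    1/split of the weight. Exponential memorylessness keeps the
    estimator unbiased in this toy model."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_hist):
        stack = [(0.0, 1.0)]  # (position, weight)
        while stack:
            x, w = stack.pop()
            flight = rng.expovariate(1.0)  # distance to absorption
            ahead = [p for p in planes if p > x]
            target = ahead[0] if ahead else depth  # next plane, or exit face
            if x + flight < target:
                continue  # absorbed before reaching it
            if target == depth:
                score += w  # transmitted: tally the weight
            else:
                for _ in range(split):  # split at the importance plane
                    stack.append((target, w / split))
    return score / n_hist

# Planes every 2 mfp, split factor 7 (roughly e^2) keeps the population flat
est = transmission_splitting(10.0, [2.0, 4.0, 6.0, 8.0], 7, 20000)
```

With the same number of histories, the analog estimator would typically score zero or one event; splitting delivers a few-percent relative error.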
Tritium environmental transport studies at TFTR
NASA Astrophysics Data System (ADS)
Ritter, P. D.; Dolan, T. J.; Longhurst, G. R.
1993-06-01
Environmental tritium concentrations will be measured near the Tokamak Fusion Test Reactor (TFTR) to help validate dynamic models of tritium transport in the environment. For model validation the database must contain sequential measurements of tritium concentrations in key environmental compartments. Since complete containment of tritium is an operational goal, the supplementary monitoring program should be able to glean useful data from an unscheduled acute release. Portable air samplers will be used to take samples automatically every 4 hours for a week after an acute release, thus obtaining the time resolution needed for code validation. Samples of soil, vegetation, and foodstuffs will be gathered daily at the same locations as the active air monitors. The database may help validate the plant/soil/air part of tritium transport models and enhance environmental tritium transport understanding for the International Thermonuclear Experimental Reactor (ITER).
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
Supersonics Project - Airport Noise Tech Challenge
NASA Technical Reports Server (NTRS)
Bridges, James
2010-01-01
The Airport Noise Tech Challenge research effort under the Supersonics Project is reviewed. While the goal of "Improved supersonic jet noise models validated on innovative nozzle concepts" remains the same, the success of the research effort has caused the thrust of the research to be modified going forward. The main activities from FY06-10 focused on development and validation of jet noise prediction codes. This required innovative diagnostic techniques to be developed and deployed, extensive jet noise and flow databases to be created, and computational tools to be developed and validated. Furthermore, in FY09-10, systems studies commissioned by the Supersonics Project showed that viable supersonic aircraft were within reach using variable cycle engine architectures if exhaust nozzle technology could provide 3-5 dB of suppression. The Project then began to focus on integrating the technologies being developed in its Tech Challenge areas to bring about successful system designs. Consequently, the Airport Noise Tech Challenge area has shifted efforts from developing jet noise prediction codes to using them to develop low-noise nozzle concepts for integration into supersonic aircraft. The new plan of research is briefly presented by technology and timelines.
Modeling activities on the negative-ion-based Neutral Beam Injectors of the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostinetti, P.; Antoni, V.; Chitarin, G.
2011-09-26
At the National Institute for Fusion Science (NIFS), large-scale negative ion sources have been widely used for the Neutral Beam Injectors (NBIs) mounted on the Large Helical Device (LHD), the world's largest superconducting helical system. These injectors have achieved outstanding performance in terms of beam energy, negative-ion current, and optics, and represent a reference for the development of heating and current drive NBIs for ITER. In the framework of the support activities for the ITER NBIs, the PRIMA test facility, which includes an RF-driven ion source with a 100 keV accelerator (SPIDER) and a complete 1 MeV neutral beam system (MITICA), is under construction at Consorzio RFX in Padova. An experimental validation of the codes has been undertaken to prove the accuracy of the simulations and the soundness of the SPIDER and MITICA designs. To this purpose, the whole set of codes has been applied to the LHD NBIs in a joint activity between Consorzio RFX and NIFS, with the goal of comparing and benchmarking the codes against the experimental data. A description of these modeling activities and a discussion of the main results obtained are reported in this paper.
Validation of Ray Tracing Code Refraction Effects
NASA Technical Reports Server (NTRS)
Heath, Stephanie L.; McAninch, Gerry L.; Smith, Charles D.; Conner, David A.
2008-01-01
NASA's current predictive capabilities using the ray tracing program (RTP) are validated using helicopter noise data taken at Eglin Air Force Base in 2007. By including refractive propagation effects due to wind and temperature, the ray tracing code is able to explain large variations in the data observed during the flight test.
Validation of a Communication Process Measure for Coding Control in Counseling.
ERIC Educational Resources Information Center
Heatherington, Laurie
The increasingly popular view of the counseling process from an interactional perspective necessitates the development of new measurement instruments which are suitable to the study of the reciprocal interaction between people. The validity of the Relational Communication Coding System, an instrument which operationalizes the constructs of…
Dynamic Forces in Spur Gears - Measurement, Prediction, and Code Validation
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Townsend, Dennis P.; Rebbechi, Brian; Lin, Hsiang Hsi
1996-01-01
Measured and computed values for dynamic loads in spur gears were compared to validate a new version of the NASA gear dynamics code DANST-PC. Strain gage data from six gear sets with different tooth profiles were processed to determine the dynamic forces acting between the gear teeth. Results demonstrate that the analysis code successfully simulates the dynamic behavior of the gears. Differences between analysis and experiment were less than 10 percent under most conditions.
Rapid Trust Establishment for Transient Use of Unmanaged Hardware
2006-12-01
[Report documentation page and figure residue. Recoverable figure content: (a) boot with trust initiator; (b) boot trusted host OS from disk and validate it; (c) launch and validate applications, distinguishing untrusted from trusted code. Example trust-alerter notification: "Execution of process with Id 3535 has been blocked to minimize security risks."]
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.
ENEL overall PWR plant models and neutronic integrated computing systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedroni, G.; Pollachini, L.; Vimercati, G.
1987-01-01
To support the design activity of the Italian nuclear energy program for the construction of pressurized water reactors, the Italian Electricity Board (ENEL) needs to verify the design as a whole (that is, the nuclear steam supply system and balance of plant) both in steady-state operation and in transient. The ENEL has therefore developed two computer models to analyze both operational and incidental transients. The models, named STRIP and SFINCS, perform the analysis of the nuclear as well as the conventional part of the plant (the control system being properly taken into account). The STRIP model has been developed by means of the French (Electricite de France) modular code SICLE, while SFINCS is based on the Italian (ENEL) modular code LEGO. STRIP validation was performed with respect to Fessenheim French power plant experimental data. Two significant transients were chosen: load step and total load rejection. SFINCS validation was performed with respect to Saint-Laurent French power plant experimental data and also by comparing the SFINCS-STRIP responses.
Greenslade, Kathryn J; Coggins, Truman E
2014-01-01
Identifying what a communication partner is looking at (referential intention) and why (social intention) is essential to successful social communication, and may be challenging for children with social communication deficits. This study explores a clinical task that assesses these intention-reading abilities within an authentic context. To gather evidence of the task's reliability and validity, and to discuss its clinical utility. The intention-reading task was administered to twenty 4-7-year-olds with typical development (TD) and ten with autism spectrum disorder (ASD). Task items were embedded in an authentic activity, and they targeted the child's ability to identify the examiner's referential and social intentions, which were communicated through joint attention behaviours. Reliability and construct validity evidence were addressed using established psychometric methods. Reliability and validity evidence supported the use of task scores for identifying children whose intention-reading warranted concern. Evidence supported the reliability of task administration and coding, and item-level codes were highly consistent with overall task performance. Supporting task validity, group differences aligned with predictions, with children with ASD exhibiting poorer and more variable task scores than children with TD. Also, as predicted, task scores correlated significantly with verbal mental age and ratings of parental concerns regarding social communication abilities. The evidence provides preliminary support for the reliability and validity of the clinical task's scores in assessing young children's real-time intention-reading abilities, which are essential for successful interactions in school and beyond. © 2014 Royal College of Speech and Language Therapists.
Fairclough, Stuart J; Weaver, R Glenn; Johnson, Siobhan; Rawlinson, Jack
2018-05-01
SOFIT+ is an observation tool to measure teacher practices related to moderate-to-vigorous physical activity (MVPA) promotion during physical education (PE). The objective of the study was to examine the validity of SOFIT+ during high school PE lessons. This cross-sectional, observational study tested the construct validity of SOFIT+ in boys' and girls' high school PE lessons. Twenty-one PE lessons were video-recorded and retrospectively coded using SOFIT+. Students wore hip-mounted accelerometers during lessons as an objective measure of MVPA. Multinomial logistic regression was used to estimate the likelihood of students engaging in MVPA during different teacher practices represented by observed individual codes and a combined SOFIT+ index-score. Fourteen individual SOFIT+ variables demonstrated a statistically significant relationship with girls' and boys' MVPA. Observed lesson segments identified as high MVPA-promoting were related to an increased likelihood of girls engaging in 5-10 (OR=2.86 [95% CI 2.41-3.40]), 15-25 (OR=7.41 [95% CI 6.05-9.06]), and 30-40 (OR=22.70 [95% CI 16.97-30.37])s of MVPA. For boys, observed high-MVPA promoting segments were related to an increased likelihood of engaging in 5-10 (OR=1.71 [95% CI 1.45-2.01]), 15-25 (OR=2.69 [95% CI 2.31-3.13]) and 30-40 (OR=4.26 [95% CI 3.44-5.29])s of MVPA. Teacher practices during high school PE lessons are significantly related to students' participation in MVPA. SOFIT+ is a valid and reliable tool to examine relationships between PE teacher practices and student MVPA during PE. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
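The odds ratios above come from multinomial logistic regression on the observation data. As a simplified, hypothetical illustration of the quantity being reported (not the study's model or its data), an odds ratio with a Woolf 95% confidence interval can be computed from a 2x2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Illustrative counts only: lesson segments with/without MVPA promotion
# cross-tabulated against students meeting an MVPA threshold
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)  # OR ~2.67 (1.42-5.02)
```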
Pan, Yi-Ling; Hwang, Ai-Wen; Simeonsson, Rune J; Lu, Lu; Liao, Hua-Fang
2015-01-01
Comprehensive description of functioning is important in providing early intervention services for infants with developmental delay/disabilities (DD). A code set of the International Classification of Functioning, Disability and Health: Children and Youth Version (ICF-CY) could facilitate the practical use of the ICF-CY in team evaluation. The purpose of this study was to derive an ICF-CY code set for infants under three years of age with early delay and disabilities (EDD Code Set) for initial team evaluation. The EDD Code Set based on the ICF-CY was developed on the basis of a Delphi survey of international professionals experienced in implementing the ICF-CY and professionals in early intervention service system in Taiwan. Twenty-five professionals completed the Delphi survey. A total of 82 ICF-CY second-level categories were identified for the EDD Code Set, including 28 categories from the domain Activities and Participation, 29 from body functions, 10 from body structures and 15 from environmental factors. The EDD Code Set of 82 ICF-CY categories could be useful in multidisciplinary team evaluations to describe functioning of infants younger than three years of age with DD, in a holistic manner. Future validation of the EDD Code Set and examination of its clinical utility are needed. The EDD Code Set with 82 essential ICF-CY categories could be useful in the initial team evaluation as a common language to describe functioning of infants less than three years of age with developmental delay/disabilities, with a more holistic view. The EDD Code Set including essential categories in activities and participation, body functions, body structures and environmental factors could be used to create a functional profile for each infant with special needs and to clarify the interaction of child and environment accounting for the child's functioning.
CFD Modeling of Free-Piston Stirling Engines
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.
2001-01-01
NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.
Evaluation in industry of a draft code of practice for manual handling.
Ashby, Liz; Tappin, David; Bentley, Tim
2004-05-01
This paper reports findings from a study which evaluated the draft New Zealand Code of Practice for Manual Handling. The evaluation assessed the ease of use, applicability and validity of the Code and in particular the associated manual handling hazard assessment tools, within New Zealand industry. The Code was studied in a sample of eight companies from four sectors of industry. Subjective feedback and objective findings indicated that the Code was useful, applicable and informative. The manual handling hazard assessment tools incorporated in the Code could be adequately applied by most users, with risk assessment outcomes largely consistent with the findings of researchers using more specific ergonomics methodologies. However, some changes were recommended to the risk assessment tools to improve usability and validity. The evaluation concluded that both the Code and the tools within it would benefit from simplification, improved typography and layout, and industry-specific information on manual handling hazards.
Validation of Living Donor Nephrectomy Codes
Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.
2018-01-01
Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
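The reported operating characteristics follow the standard definitions sensitivity = TP/(TP+FN) and PPV = TP/(TP+FP). A small sketch with hypothetical cell counts chosen only to reproduce the reported ~97% sensitivity and ~90% PPV against the 1199 reference-standard nephrectomies (the abstract does not give the actual counts):

```python
def operating_characteristics(tp, fp, fn):
    """Sensitivity and positive predictive value of a code algorithm
    versus a chart-review reference standard."""
    sensitivity = tp / (tp + fn)  # share of true donors the algorithm finds
    ppv = tp / (tp + fp)          # share of flagged patients who are true donors
    return sensitivity, ppv

# Hypothetical: 1163 of 1199 true donors captured, plus 129 false positives
sens, ppv = operating_characteristics(tp=1163, fp=129, fn=36)
```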
Billing code algorithms to identify cases of peripheral artery disease from administrative data
Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J
2013-01-01
Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
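The "simpler algorithm" described above reduces to checking whether any billed diagnosis falls in ICD-9 440.20-440.29, i.e., a prefix match on 440.2. A minimal sketch with made-up patient data:

```python
def has_pad_code(dx_codes):
    """Simple PAD algorithm: any ICD-9 code in the 440.20-440.29 range."""
    return any(code.startswith("440.2") for code in dx_codes)

# Hypothetical patients and their billed ICD-9 codes
patients = {
    "p1": ["440.21", "250.00"],  # atherosclerosis of extremities -> PAD
    "p2": ["414.01"],            # coronary disease only -> not flagged
    "p3": ["440.29"],
}
pad_cases = sorted(pid for pid, dx in patients.items() if has_pad_code(dx))
```

As the study found, such a rule can be quite specific yet miss many community cases that never receive a 440.2x code.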
Validation Data and Model Development for Fuel Assembly Response to Seismic Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardet, Philippe; Ricciardi, Guillaume
2016-01-31
Vibrations are inherently present in nuclear reactors, especially in the cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and on wear and tear in the reactor, and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics, multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.
NASA Technical Reports Server (NTRS)
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research into procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate its performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
Cohen, Alysia; McDonald, Samantha; McIver, Kerry; Pate, Russell; Trost, Stewart
2014-05-01
The purpose of this study was to evaluate the validity and interrater reliability of the Observational System for Recording Activity in Children: Youth Sports (OSRAC:YS). Children (N = 29) participating in a parks and recreation soccer program were observed during regularly scheduled practices. Physical activity (PA) intensity and contextual factors were recorded by momentary time-sampling procedures (10-second observe, 20-second record). Two observers simultaneously observed and recorded children's PA intensity, practice context, social context, coach behavior, and coach proximity. Interrater reliability was based on agreement (Kappa) between the observer's coding for each category, and the Intraclass Correlation Coefficient (ICC) for percent of time spent in MVPA. Validity was assessed by calculating the correlation between OSRAC:YS estimated and objectively measured MVPA. Kappa statistics for each category demonstrated substantial to almost perfect interobserver agreement (Kappa = 0.67-0.93). The ICC for percent time in MVPA was 0.76 (95% C.I. = 0.49-0.90). A significant correlation (r = .73) was observed for MVPA recorded by observation and MVPA measured via accelerometry. The results indicate the OSRAC:YS is a reliable and valid tool for measuring children's PA and contextual factors during a youth soccer practice.
Experimental program for real gas flow code validation at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Deiwert, George S.; Strawa, Anthony W.; Sharma, Surendra P.; Park, Chul
1989-01-01
The experimental program for validating real gas hypersonic flow codes at NASA Ames Research Center is described. Ground-based test facilities used include ballistic ranges, shock tubes and shock tunnels, arc jet facilities, and heated-air hypersonic wind tunnels. Also included are large-scale computer systems for kinetic theory simulations and benchmark code solutions. Flight tests consist of the Aeroassist Flight Experiment, the Space Shuttle, Project Fire 2, and planetary probes such as Galileo, Pioneer Venus, and PAET.
Validation: Codes to compare simulation data to various observations
NASA Astrophysics Data System (ADS)
Cohn, J. D.
2017-02-01
Validation provides codes to compare simulated data against several observations: simulated stellar mass and star formation rate; the simulated stellar mass function against the observed stellar mass function from PRIMUS or SDSS-GALEX in several redshift bins from 0.01 to 1.0; and the simulated B band luminosity function against the observed stellar mass function. It also creates plots for various attributes, including stellar mass functions and stellar mass to halo mass. These codes can model predictions (in some cases alongside observational data) to test other mock catalogs.
Barnado, April; Casey, Carolyn; Carroll, Robert J; Wheless, Lee; Denny, Joshua C; Crofford, Leslie J
2017-05-01
To study systemic lupus erythematosus (SLE) in the electronic health record (EHR), we must accurately identify patients with SLE. Our objective was to develop and validate novel EHR algorithms that use International Classification of Diseases, Ninth Revision (ICD-9), Clinical Modification codes, laboratory testing, and medications to identify SLE patients. We used Vanderbilt's Synthetic Derivative, a de-identified version of the EHR, with 2.5 million subjects. We selected all individuals with at least 1 SLE ICD-9 code (710.0), yielding 5,959 individuals. To create a training set, 200 subjects were randomly selected for chart review. A subject was defined as a case if diagnosed with SLE by a rheumatologist, nephrologist, or dermatologist. Positive predictive values (PPVs) and sensitivity were calculated for combinations of code counts of the SLE ICD-9 code, a positive antinuclear antibody (ANA), ever use of medications, and a keyword of "lupus" in the problem list. The algorithms with the highest PPV were each internally validated using a random set of 100 individuals from the remaining 5,759 subjects. The algorithm with the highest PPV at 95% in the training set and 91% in the validation set was 3 or more counts of the SLE ICD-9 code, ANA positive (≥1:40), and ever use of both disease-modifying antirheumatic drugs and steroids, while excluding individuals with systemic sclerosis and dermatomyositis ICD-9 codes. We developed and validated the first EHR algorithm that incorporates laboratory values and medications with the SLE ICD-9 code to identify patients with SLE accurately. © 2016, American College of Rheumatology.
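The best-performing SLE algorithm above is a simple conjunction of rules, so it can be sketched directly. The record field names and the exclusion ICD-9 codes (710.1 for systemic sclerosis, 710.3 for dermatomyositis) are assumptions for illustration; the abstract does not describe the EHR schema:

```python
# Hypothetical EHR fields; schema is assumed, not taken from the abstract.
EXCLUSION_CODES = {"710.1", "710.3"}  # assumed ICD-9: systemic sclerosis, dermatomyositis

def flag_sle(record):
    """Best-performing rule from the abstract: >= 3 counts of ICD-9 710.0,
    ANA positive at >= 1:40, ever-use of both DMARDs and steroids, and no
    systemic sclerosis or dermatomyositis codes."""
    return (
        record["sle_code_count"] >= 3
        and record["max_ana_titer"] >= 40          # titer 1:40 stored as 40
        and record["ever_dmard"]
        and record["ever_steroid"]
        and not (record["icd9_codes"] & EXCLUSION_CODES)
    )
```

Each conjunct trades sensitivity for PPV; the abstract reports the full conjunction reaching a PPV of 95% in training and 91% in validation.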
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
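The Richardson Extrapolation and GCI machinery that VAVUQ automates can be illustrated on three grid levels. A minimal sketch, assuming a constant refinement ratio r and the conventional safety factor of 1.25; function and variable names are mine, not VAVUQ's:

```python
import math

def richardson_gci(f1, f2, f3, r=2.0, fs=1.25):
    """Observed order p, Richardson-extrapolated value, and fine-grid GCI
    from solutions on three grids (f1 finest, f3 coarsest) with constant
    refinement ratio r and safety factor fs."""
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)   # observed order
    f_extrap = f1 + (f1 - f2) / (r ** p - 1.0)               # RE estimate
    gci_fine = fs * abs((f1 - f2) / f1) / (r ** p - 1.0)     # relative error band
    return p, f_extrap, gci_fine

# Manufactured check: a solver with exact answer 1.0 and pure 2nd-order error
# 0.1*h**2 sampled at h = 0.1, 0.2, 0.4 should recover p = 2 and f = 1.0.
p, f_extrap, gci = richardson_gci(1.001, 1.004, 1.016)
```

The lower and upper numerical-error bounds the abstract mentions correspond to banding f_extrap by the GCI; mixed-order schemes need the more careful treatment the package provides.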
NASA Rotor 37 CFD Code Validation: Glenn-HT Code
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2010-01-01
In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.
Shinde, Satomi K.; Danov, Stacy; Chen, Chin-Chih; Clary, Jamie; Harper, Vicki; Bodfish, James W.; Symons, Frank J.
2014-01-01
Objectives The main aim of the study was to generate initial convergent validity evidence for the Pain and Discomfort Scale (PADS) for use with non-verbal adults with intellectual disabilities (ID). Methods Forty-four adults with intellectual disability (mean age = 46, 52% male) were evaluated using a standardized sham-controlled and blinded sensory testing protocol, from which Facial Action Coding System (FACS) and PADS scores were tested for (1) sensitivity to an array of calibrated sensory stimuli, (2) specificity (active vs. sham trials), and (3) concordance. Results The primary findings were that participants were reliably coded using both FACS and PADS approaches as being reactive to the sensory stimuli (FACS: F[2, 86] = 4.71, P < .05, PADS: F[2, 86] = 21.49, P < .05) (sensitivity evidence), not reactive during the sham stimulus trials (FACS: F[1, 43] = 3.77, p = .06, PADS: F[1, 43] = 5.87, p = .02) (specificity evidence), and there were significant (r = .41 - .51, p < .01) correlations between PADS and FACS (convergent validity evidence). Discussion FACS is an objective coding platform for facial expression. It requires intensive training and resources for scoring. As such it may be limited for clinical application. PADS was designed for clinical application. PADS scores were comparable to FACS scores under controlled evaluation conditions, providing partial convergent validity evidence for its use. PMID:24135902
Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica
2013-01-01
Background Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods We identified a cohort of patients with ICD-9 codes for HCC during 2005–2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403
Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica
2016-02-01
Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.
Gupta, Sumit; Nathan, Paul C; Baxter, Nancy N; Lau, Cindy; Daly, Corinne; Pole, Jason D
2018-06-01
Despite the importance of estimating population level cancer outcomes, most registries do not collect critical events such as relapse. Attempts to use health administrative data to identify these events have focused on older adults and have been mostly unsuccessful. We developed and tested administrative data-based algorithms in a population-based cohort of adolescents and young adults with cancer. We identified all Ontario adolescents and young adults 15-21 years old diagnosed with leukemia, lymphoma, sarcoma, or testicular cancer between 1992-2012. Chart abstraction determined the end of initial treatment (EOIT) date and subsequent cancer-related events (progression, relapse, second cancer). Linkage to population-based administrative databases identified fee and procedure codes indicating cancer treatment or palliative care. Algorithms determining EOIT based on a time interval free of treatment-associated codes, and new cancer-related events based on billing codes, were compared with chart-abstracted data. The cohort comprised 1404 patients. Time periods free of treatment-associated codes did not validly identify EOIT dates; using subsequent codes to identify new cancer events was thus associated with low sensitivity (56.2%). However, using administrative data codes that occurred after the EOIT date based on chart abstraction, the first cancer-related event was identified with excellent validity (sensitivity, 87.0%; specificity, 93.3%; positive predictive value, 81.5%; negative predictive value, 95.5%). Although administrative data alone did not validly identify cancer-related events, administrative data in combination with chart collected EOIT dates was associated with excellent validity. The collection of EOIT dates by cancer registries would significantly expand the potential of administrative data linkage to assess cancer outcomes.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
2010-01-01
Background In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. Methods SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). Results For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered less often (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3).
Conclusions Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians where the greatest amount of variation was found. PMID:20416069
Hjerpe, Per; Merlo, Juan; Ohlsson, Henrik; Bengtsson Boström, Kristina; Lindblad, Ulf
2010-04-23
In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered less often (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3).
Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians where the greatest amount of variation was found.
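The median odds ratio reported in the Skaraborg study has a closed form in terms of the cluster-level variance of the multilevel logistic model: MOR = exp(sqrt(2*sigma^2) * Phi_inv(0.75)). A small sketch of the formula and its inverse; the variances shown are back-calculated from the reported MORs for illustration, not taken from the paper:

```python
import math

PHI_INV_075 = 0.6744897501960817  # 75th percentile of the standard normal

def median_odds_ratio(cluster_variance):
    """MOR = exp(sqrt(2 * variance) * Phi^-1(0.75)); equals 1 when clusters
    do not differ, and grows with the between-cluster variance."""
    return math.exp(math.sqrt(2.0 * cluster_variance) * PHI_INV_075)

def variance_from_mor(mor):
    """Back-calculate the between-cluster variance implied by a reported MOR."""
    return (math.log(mor) / (math.sqrt(2.0) * PHI_INV_075)) ** 2

# Illustrative back-calculation from the reported physician- and HCC-level MORs.
var_physician = variance_from_mor(4.2)
var_hcc = variance_from_mor(2.3)
```

An MOR of 4.2 means that for two randomly chosen physicians, the odds of ICD registration differ by a median factor of 4.2, which is why the authors target physicians for intervention.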
Validity of the coding for herpes simplex encephalitis in the Danish National Patient Registry
Jørgensen, Laura Krogh; Dalgaard, Lars Skov; Østergaard, Lars Jørgen; Andersen, Nanna Skaarup; Nørgaard, Mette; Mogensen, Trine Hyrup
2016-01-01
Background Large health care databases are a valuable source of infectious disease epidemiology if diagnoses are valid. The aim of this study was to investigate the accuracy of the recorded diagnosis coding of herpes simplex encephalitis (HSE) in the Danish National Patient Registry (DNPR). Methods The DNPR was used to identify all hospitalized patients, aged ≥15 years, with a first-time diagnosis of HSE according to the International Classification of Diseases, tenth revision (ICD-10), from 2004 to 2014. To validate the coding of HSE, we collected data from the Danish Microbiology Database, from departments of clinical microbiology, and from patient medical records. Cases were classified as confirmed, probable, or no evidence of HSE. We estimated the positive predictive value (PPV) of the HSE diagnosis coding stratified by diagnosis type, study period, and department type. Furthermore, we estimated the proportion of HSE cases coded with nonspecific ICD-10 codes of viral encephalitis and also the sensitivity of the HSE diagnosis coding. Results We were able to validate 398 (94.3%) of the 422 HSE diagnoses identified via the DNPR. Of these, 202 (50.8%) were classified as confirmed cases and 29 (7.3%) as probable cases, providing an overall PPV of 58.0% (95% confidence interval [CI]: 53.0-62.9). For "Encephalitis due to herpes simplex virus" (ICD-10 code B00.4), the PPV was 56.6% (95% CI: 51.1-62.0). Similarly, the PPV for "Meningoencephalitis due to herpes simplex virus" (ICD-10 code B00.4A) was 56.8% (95% CI: 39.5-72.9). "Herpes viral encephalitis" (ICD-10 code G05.1E) had a PPV of 75.9% (95% CI: 56.5-89.7), thereby representing the highest PPV. The estimated sensitivity was 95.5%. Conclusion The PPVs of the ICD-10 diagnosis coding for adult HSE in the DNPR were relatively low. Hence, the DNPR should be used with caution when studying patients with encephalitis caused by herpes simplex virus. PMID:27330328
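The overall PPV in the Danish registry study can be reproduced from the stated counts: 202 confirmed plus 29 probable cases among 398 validated diagnoses gives 58.0%. A Wald (normal-approximation) interval, sketched below, lands very close to the reported 95% CI of 53.0-62.9; the authors presumably used a slightly different interval method:

```python
import math

def ppv_with_ci(successes, n, z=1.96):
    """Point estimate and Wald (normal-approximation) 95% CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1.0 - p) / n)
    return p, (p - z * se, p + z * se)

# 202 confirmed + 29 probable cases among 398 validated HSE diagnoses.
p, (lo, hi) = ppv_with_ci(202 + 29, 398)
print(f"PPV = {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

For proportions near 0.5 with n around 400, the Wald interval is a reasonable approximation; near 0 or 1, a Wilson or exact interval would be preferable.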
Cadieux, Geneviève; Tamblyn, Robyn; Buckeridge, David L; Dendukuri, Nandini
2017-08-01
Valid measurement of outcomes such as disease prevalence using health care utilization data is fundamental to the implementation of a "learning health system." Definitions of such outcomes can be complex, based on multiple diagnostic codes. The literature on validating such data demonstrates a lack of awareness of the need for a stratified sampling design and corresponding statistical methods. We propose a method for validating the measurement of diagnostic groups that have: (1) different prevalences of diagnostic codes within the group; and (2) low prevalence. We describe an estimation method whereby: (1) low-prevalence diagnostic codes are oversampled, and the positive predictive value (PPV) of the diagnostic group is estimated as a weighted average of the PPV of each diagnostic code; and (2) claims that fall within a low-prevalence diagnostic group are oversampled relative to claims that are not, and bias-adjusted estimators of sensitivity and specificity are generated. We illustrate our proposed method using an example from population health surveillance in which diagnostic groups are applied to physician claims to identify cases of acute respiratory illness. Failure to account for the prevalence of each diagnostic code within a diagnostic group leads to the underestimation of the PPV, because low-prevalence diagnostic codes are more likely to be false positives. Failure to adjust for oversampling of claims that fall within the low-prevalence diagnostic group relative to those that do not leads to the overestimation of sensitivity and underestimation of specificity.
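The weighted-average PPV estimator described in this abstract can be sketched in a few lines. The diagnostic codes and counts below are hypothetical, chosen only to show why ignoring within-group prevalence biases the estimate:

```python
def group_ppv(codes):
    """Prevalence-weighted PPV of a diagnostic group: a weighted average of
    each member code's PPV, weighted by that code's claim count."""
    total = sum(n for n, _ in codes.values())
    return sum(n * ppv for n, ppv in codes.values()) / total

# Hypothetical group: one common high-PPV code, one rare low-PPV code.
group = {"code-common": (900, 0.90), "code-rare": (100, 0.40)}
weighted = group_ppv(group)       # prevalence-weighted estimate
naive = (0.90 + 0.40) / 2.0       # unweighted mean, ignoring prevalence
```

Here the weighted estimate is 0.85 while the unweighted mean is 0.65, mirroring the abstract's point that failing to weight by prevalence underestimates the group PPV when rare codes are the least reliable.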
Abraham, N S; Cohen, D C; Rivers, B; Richardson, P
2006-07-15
To validate veterans affairs (VA) administrative data for the diagnosis of nonsteroidal anti-inflammatory drug (NSAID)-related upper gastrointestinal events (UGIE) and to develop a diagnostic algorithm. A retrospective study of veterans prescribed an NSAID as identified from the national pharmacy database merged with in-patient and out-patient data, followed by primary chart abstraction. Contingency tables were constructed to allow comparison with a random sample of patients prescribed an NSAID, but without UGIE. Multivariable logistic regression analysis was used to derive a predictive algorithm. Once derived, the algorithm was validated in a separate cohort of veterans. Of 906 patients, 606 had a diagnostic code for UGIE; 300 were a random subsample of 11 744 patients (control). Only 161 had a confirmed UGIE. The positive predictive value (PPV) of diagnostic codes was poor, but improved from 27% to 51% with the addition of endoscopic procedural codes. The strongest predictors of UGIE were an in-patient ICD-9 code for gastric ulcer, duodenal ulcer and haemorrhage combined with upper endoscopy. This algorithm had a PPV of 73% when limited to patients ≥65 years (c-statistic 0.79). Validation of the algorithm revealed a PPV of 80% among patients with an overlapping NSAID prescription. NSAID-related UGIE can be assessed using VA administrative data. The optimal algorithm includes an in-patient ICD-9 code for gastric or duodenal ulcer and gastrointestinal bleeding combined with a procedural code for upper endoscopy.
Overview of hypersonic CFD code calibration studies
NASA Technical Reports Server (NTRS)
Miller, Charles G.
1987-01-01
The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics and at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.
Baiao, R; Baptista, J; Carneiro, A; Pinto, R; Toscano, C; Fearon, P; Soares, I; Mesquita, A R
2018-07-01
The preschool years are a period of great developmental achievements, which impact critically on a child's interactive skills. Having valid and reliable measures to assess interactive behaviour at this stage is therefore crucial. The aim of this study was to describe the adaptation and validation of the child coding of the Coding System for Mother-Child Interactions and discuss its applications and implications in future research and practice. Two hundred twenty Portuguese preschoolers and their mothers were videotaped during a structured task. Child and mother interactive behaviours were coded based on the task. Maternal reports on the child's temperament and emotional and behaviour problems were also collected, along with family psychosocial information. Interrater agreement was confirmed. The use of child Cooperation, Enthusiasm, and Negativity as subscales was supported by their correlations across tasks. Moreover, these subscales were correlated with each other, which supports the use of a global child interactive behaviour score. Convergent validity with a measure of emotional and behavioural problems (Child Behaviour Checklist 1 ½-5) was established, as well as divergent validity with a measure of temperament (Children's Behaviour Questionnaire-Short Form). Regarding associations with family variables, child interactive behaviour was only associated with maternal behaviour. Findings suggest that this coding system is a valid and reliable measure for assessing child interactive behaviour in preschool-age children. It therefore offers an important alternative for research and practice in this area, with reduced costs and more flexible training requirements. Attention should be given in future research to expanding this work to clinical populations and different age groups. © 2018 John Wiley & Sons Ltd.
Mertz, Marcel; Sofaer, Neema; Strech, Daniel
2014-09-27
The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.
CASL Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mousseau, Vincent Andrew; Dinh, Nam
2016-06-30
This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. It will be a living document that tracks CASL progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, and MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy's (DOE's) CASL program in support of milestone CASL.P13.02.
Validation Results for LEWICE 2.0 [Supplement]
NASA Technical Reports Server (NTRS)
Wright, William B.; Rutkowski, Adam
1999-01-01
Two CD-ROMs contain experimental ice shapes and code prediction used for validation of LEWICE 2.0 (see NASA/CR-1999-208690, CASI ID 19990021235). The data include ice shapes for both experiment and for LEWICE, all of the input and output files for the LEWICE cases, JPG files of all plots generated, an electronic copy of the text of the validation report, and a Microsoft Excel(R) spreadsheet containing all of the quantitative measurements taken. The LEWICE source code and executable are not contained on the discs.
Colour cyclic code for Brillouin distributed sensors
NASA Astrophysics Data System (ADS)
Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne
2015-09-01
For the first time, a colour cyclic code (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code provides an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is carried out and validates the theoretical coding gain.
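The gain comparison above can be sketched numerically. A minimal sketch, assuming the standard Simplex-code gain formula G = (L+1)/(2√L) for an intensity-modulated cyclic code of length L; the function names are illustrative and not from the paper:

```python
import math

def cyclic_code_gain(L):
    """SNR gain of an intensity-modulated cyclic (Simplex) code of
    length L, relative to a single-pulse measurement: (L+1)/(2*sqrt(L))."""
    return (L + 1) / (2 * math.sqrt(L))

def colour_cyclic_code_gain(L):
    """Colour cyclic code: per the abstract, an extra factor sqrt(2)
    over the intensity-modulated code, with the same number of sequences."""
    return math.sqrt(2) * cyclic_code_gain(L)

for L in (7, 31, 127, 255):
    print(L, round(cyclic_code_gain(L), 2), round(colour_cyclic_code_gain(L), 2))
```

The extra √2 is independent of the code length, so the relative advantage of the colour variant is the same for short and long sequences.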
Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.
2001-01-01
An experimental program was generated by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady-state hot gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36-inch-chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two-week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.
(NTF) National Transonic Facility Test 213-SFW Flow Control II,
2012-11-19
(NTF) National Transonic Facility Test 213-SFW Flow Control II, Fast-MAC Model: The Fundamental Aerodynamics Subsonic Transonic-Modular Active Control (Fast-MAC) model was tested for the second time in the NTF. The objectives were to document the effects of Reynolds number on circulation control aerodynamics and to develop an open data set for CFD code validation. Image taken in building 1236, National Transonic Facility.
Verification and Validation Strategy for LWRS Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl M. Stoots; Richard R. Schultz; Hans D. Gougar
2012-09-01
One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant's safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models/codes such as RELAP-7 can be used for reactor licensing/relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203, the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants/designs should be studied first, coupling modern supporting experiments to the stringent needs of new high-fidelity models/codes, and scaling of aging effects.
Summary of papers on current and anticipated uses of thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The author reviews a range of recent papers that discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low-pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; and build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.
[Orthopedic and trauma surgery in the German-DRG-System 2009].
Franz, D; Windolf, J; Siebert, C H; Roeder, N
2009-01-01
The German DRG system was advanced into version 2009. For orthopedic and trauma surgery, significant changes were made concerning the coding of diagnoses and medical procedures and concerning the DRG structure. Relevant diagnoses, medical procedures, and G-DRGs in the 2008 and 2009 versions were analysed based on the publications of the German DRG institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes for 2009 focused on the development of the DRG structure, DRG validation, and codes for medical procedures to be used for very complex cases. The impact of these changes on German hospitals may vary depending on their range of activities. The G-DRG system has again gained complexity. High demands are made on correct and complete coding of complex orthopedic and trauma surgery cases. The quality of case allocation within the G-DRG system was improved. Nevertheless, further adjustments of the G-DRG system, especially for cases with severe injuries, are necessary.
GATA: A graphic alignment tool for comparative sequence analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nix, David A.; Eisen, Michael B.
2005-01-01
Several problems exist with current methods used to align DNA sequences for comparative sequence analysis. Most dynamic programming algorithms assume that conserved sequence elements are collinear. This assumption appears valid when comparing orthologous protein coding sequences. Functional constraints on proteins provide strong selective pressure against sequence inversions, and minimize sequence duplications and feature shuffling. For non-coding sequences this collinearity assumption is often invalid. For example, enhancers contain clusters of transcription factor binding sites that change in number, orientation, and spacing during evolution, yet the enhancer retains its activity. Dot-plot analysis is often used to estimate non-coding sequence relatedness. Yet dot plots do not actually align sequences and thus cannot account well for base insertions or deletions. Moreover, they lack an adequate statistical framework for comparing sequence relatedness and are limited to pairwise comparisons. Lastly, dot plots and dynamic programming text outputs fail to provide an intuitive means for visualizing DNA alignments.
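The dot-plot limitation described above is easier to see with a concrete windowed dot plot. A minimal sketch (the window size and match threshold are illustrative defaults, not parameters from GATA, which takes a different, alignment-based approach):

```python
def dotplot(seq_a, seq_b, window=5, min_matches=4):
    """Windowed dot plot: record (i, j) whenever the window starting at
    seq_a[i] and the window starting at seq_b[j] agree in at least
    min_matches positions. Note that this only marks coordinates; it
    never produces an alignment, so an insertion or deletion simply
    shifts a diagonal rather than being modeled explicitly."""
    points = []
    for i in range(len(seq_a) - window + 1):
        for j in range(len(seq_b) - window + 1):
            matches = sum(a == b for a, b in zip(seq_a[i:i + window],
                                                 seq_b[j:j + window]))
            if matches >= min_matches:
                points.append((i, j))
    return points

# A tandem repeat shows up as off-diagonal runs of points:
hits = dotplot("ACGTACGT", "ACGTACGT", window=4, min_matches=4)
```

Because the output is just a cloud of (i, j) points, there is no natural score or significance test attached to it, which is exactly the statistical-framework gap the abstract criticizes.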
Initial verification and validation of RAZORBACK - A research reactor transient analysis code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2015-09-01
This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamek, Julian; Daverio, David; Durrer, Ruth
We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos, where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and to go beyond the usually adopted quasi-static approximation. Our code is publicly available.
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
Greenberg, Jacob K; Ladner, Travis R; Olsen, Margaret A; Shannon, Chevis N; Liu, Jingxia; Yarbrough, Chester K; Piccirillo, Jay F; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D
2015-08-01
The use of administrative billing data may enable large-scale assessments of treatment outcomes for Chiari malformation type I (CM-1). However, to utilize such data sets, validated International Classification of Diseases, Ninth Revision (ICD-9-CM) code algorithms for identifying CM-1 surgery are needed. We aimed to validate 2 ICD-9-CM code algorithms identifying patients undergoing CM-1 decompression surgery. We retrospectively analyzed the validity of 2 ICD-9-CM code algorithms for identifying adult CM-1 decompression surgery performed at 2 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-1), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression, or laminectomy). Algorithm 2 restricted this group to patients with a primary diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. Among 340 first-time admissions identified by Algorithm 1, the overall PPV for CM-1 decompression was 65%. Among the 214 admissions identified by Algorithm 2, the overall PPV was 99.5%. The PPV for Algorithm 1 was lower in the Vanderbilt cohort (59%), males (40%), and patients treated between 2009 and 2013 (57%), whereas the PPV of Algorithm 2 remained high (≥99%) across subgroups. The sensitivity of Algorithms 1 (86%) and 2 (83%) was above 75% in all subgroups. ICD-9-CM code Algorithm 2 has excellent PPV and good sensitivity to identify adult CM-1 decompression surgery. These results lay the foundation for studying CM-1 treatment outcomes by using large administrative databases.
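The validation statistics reported above follow directly from the standard 2x2 confusion-matrix definitions. A minimal sketch (the raw cell counts below are illustrative back-calculations consistent with the reported rates, not figures taken from the study):

```python
def ppv(tp, fp):
    """Positive predictive value: of the admissions an algorithm flags,
    the fraction that truly had the surgery of interest."""
    return tp / (tp + fp)

def sensitivity(tp, fn):
    """Sensitivity (recall): of the true surgeries, the fraction the
    algorithm manages to flag."""
    return tp / (tp + fn)

# Algorithm 1 flagged 340 admissions at ~65% PPV, so roughly:
tp = 221           # hypothetical true positives among the 340 flagged
fp = 340 - tp      # hypothetical false positives
print(round(ppv(tp, fp), 2))   # -> 0.65
```

The trade-off in the abstract is visible in these formulas: restricting to a primary diagnosis (Algorithm 2) removes false positives and raises PPV, at the cost of a few missed cases (false negatives), which slightly lowers sensitivity.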
ESTEST: A Framework for the Verification and Validation of Electronic Structure Codes
NASA Astrophysics Data System (ADS)
Yuan, Gary; Gygi, Francois
2011-03-01
ESTEST is a verification and validation (V&V) framework for electronic structure codes that supports Qbox, Quantum Espresso, ABINIT, and the Exciting Code, with support planned for many more. We discuss various approaches to the electronic structure V&V problem implemented in ESTEST, related to parsing, formats, data management, search, comparison, and analyses. Additionally, an early experiment in the distribution of ESTEST V&V servers among the electronic structure community will be presented. Supported by NSF-OCI 0749217 and DOE FC02-06ER25777.
Supersonic Coaxial Jet Experiment for CFD Code Validation
NASA Technical Reports Server (NTRS)
Cutler, A. D.; Carty, A. A.; Doerner, S. E.; Diskin, G. S.; Drummond, J. P.
1999-01-01
A supersonic coaxial jet facility has been designed to provide experimental data suitable for the validation of CFD codes used to analyze high-speed propulsion flows. The center jet is of a light gas and the coflow jet is of air, and the mixing layer between them is compressible. Various methods have been employed in characterizing the jet flow field, including schlieren visualization, pitot, total temperature and gas sampling probe surveying, and RELIEF velocimetry. A Navier-Stokes code has been used to calculate the nozzle flow field and the results compared to the experiment.
Methodology, status and plans for development and assessment of Cathare code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.; Faydide, B.
1997-07-01
This paper presents the methodology, status, and plans for the development, assessment, and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF, and FRAMATOME for PWR safety analysis. First, the status of code development and assessment is presented, along with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future developments of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Also, physical improvements are required in the field of low-pressure transients and in the modeling for the 3-D model.
Accuracy of external cause-of-injury coding in VA polytrauma patient discharge records.
Carlson, Kathleen F; Nugent, Sean M; Grill, Joseph; Sayer, Nina A
2010-01-01
Valid and efficient methods of identifying the etiology of treated injuries are critical for characterizing patient populations and developing prevention and rehabilitation strategies. We examined the accuracy of external cause-of-injury codes (E-codes) in Veterans Health Administration (VHA) administrative data for a population of injured patients. Chart notes and E-codes were extracted for 566 patients treated at any one of four VHA Polytrauma Rehabilitation Center sites between 2001 and 2006. Two expert coders, blinded to VHA E-codes, used chart notes to assign "gold standard" E-codes to injured patients. The accuracy of VHA E-coding was examined based on these gold standard E-codes. Only 382 of 517 (74%) injured patients were assigned E-codes in VHA records. Sensitivity of VHA E-codes varied significantly by site (range: 59%-91%, p < 0.001). Sensitivity was highest for combat-related injuries (81%) and lowest for fall-related injuries (60%). Overall specificity of E-codes was high (92%). E-coding accuracy was markedly higher when we restricted analyses to records that had been assigned VHA E-codes. E-codes may not be valid for ascertaining source-of-injury data for all injuries among VHA rehabilitation inpatients at this time. Enhanced training and policies may ensure more widespread, standardized use and accuracy of E-codes for injured veterans treated in the VHA.
TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moridis, G.J.; Pruess
1992-11-01
The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications (*). Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.
NASA Radiation Protection Research for Exploration Missions
NASA Technical Reports Server (NTRS)
Wilson, John W.; Cucinotta, Francis A.; Tripathi, Ram K.; Heinbockel, John H.; Tweed, John; Mertens, Christopher J.; Walker, Steve A.; Blattnig, Steven R.; Zeitlin, Cary J.
2006-01-01
The HZETRN code was used in recent trade studies for renewed lunar exploration and currently used in engineering development of the next generation of space vehicles, habitats, and EVA equipment. A new version of the HZETRN code capable of simulating high charge and energy (HZE) ions, light-ions and neutrons with either laboratory or space boundary conditions with enhanced neutron and light-ion propagation is under development. Atomic and nuclear model requirements to support that development will be discussed. Such engineering design codes require establishing validation processes using laboratory ion beams and space flight measurements in realistic geometries. We discuss limitations of code validation due to the currently available data and recommend priorities for new data sets.
Validation of Multitemperature Nozzle Flow Code
NASA Technical Reports Server (NTRS)
Park, Chul; Lee, Seung-Ho
1994-01-01
A computer code, NOZNT (nozzle in n-temperatures), which calculates one-dimensional flows of partially dissociated and ionized air in an expanding nozzle, is tested against three existing sets of experimental data taken in arcjet wind tunnels. The code accounts for the differences among various temperatures, i.e., translational-rotational temperature, vibrational temperatures of individual molecular species, and electron-electronic temperature, and the effects of impurities. The experimental data considered are (1) spectroscopic emission data; (2) electron beam data on vibrational temperature; and (3) mass-spectrometric species concentration data. It is shown that the impurities are inconsequential for the arcjet flows, and the NOZNT code is validated by numerically reproducing the experimental data.
Mahajan, Anubha; Wessel, Jennifer; Willems, Sara M; Zhao, Wei; Robertson, Neil R; Chu, Audrey Y; Gan, Wei; Kitajima, Hidetoshi; Taliun, Daniel; Rayner, N William; Guo, Xiuqing; Lu, Yingchang; Li, Man; Jensen, Richard A; Hu, Yao; Huo, Shaofeng; Lohman, Kurt K; Zhang, Weihua; Cook, James P; Prins, Bram Peter; Flannick, Jason; Grarup, Niels; Trubetskoy, Vassily Vladimirovich; Kravic, Jasmina; Kim, Young Jin; Rybin, Denis V; Yaghootkar, Hanieh; Müller-Nurasyid, Martina; Meidtner, Karina; Li-Gao, Ruifang; Varga, Tibor V; Marten, Jonathan; Li, Jin; Smith, Albert Vernon; An, Ping; Ligthart, Symen; Gustafsson, Stefan; Malerba, Giovanni; Demirkan, Ayse; Tajes, Juan Fernandez; Steinthorsdottir, Valgerdur; Wuttke, Matthias; Lecoeur, Cécile; Preuss, Michael; Bielak, Lawrence F; Graff, Marielisa; Highland, Heather M; Justice, Anne E; Liu, Dajiang J; Marouli, Eirini; Peloso, Gina Marie; Warren, Helen R; Afaq, Saima; Afzal, Shoaib; Ahlqvist, Emma; Almgren, Peter; Amin, Najaf; Bang, Lia B; Bertoni, Alain G; Bombieri, Cristina; Bork-Jensen, Jette; Brandslund, Ivan; Brody, Jennifer A; Burtt, Noël P; Canouil, Mickaël; Chen, Yii-Der Ida; Cho, Yoon Shin; Christensen, Cramer; Eastwood, Sophie V; Eckardt, Kai-Uwe; Fischer, Krista; Gambaro, Giovanni; Giedraitis, Vilmantas; Grove, Megan L; de Haan, Hugoline G; Hackinger, Sophie; Hai, Yang; Han, Sohee; Tybjærg-Hansen, Anne; Hivert, Marie-France; Isomaa, Bo; Jäger, Susanne; Jørgensen, Marit E; Jørgensen, Torben; Käräjämäki, Annemari; Kim, Bong-Jo; Kim, Sung Soo; Koistinen, Heikki A; Kovacs, Peter; Kriebel, Jennifer; Kronenberg, Florian; Läll, Kristi; Lange, Leslie A; Lee, Jung-Jin; Lehne, Benjamin; Li, Huaixing; Lin, Keng-Hung; Linneberg, Allan; Liu, Ching-Ti; Liu, Jun; Loh, Marie; Mägi, Reedik; Mamakou, Vasiliki; McKean-Cowdin, Roberta; Nadkarni, Girish; Neville, Matt; Nielsen, Sune F; Ntalla, Ioanna; Peyser, Patricia A; Rathmann, Wolfgang; Rice, Kenneth; Rich, Stephen S; Rode, Line; Rolandsson, Olov; Schönherr, Sebastian; Selvin, 
Elizabeth; Small, Kerrin S; Stančáková, Alena; Surendran, Praveen; Taylor, Kent D; Teslovich, Tanya M; Thorand, Barbara; Thorleifsson, Gudmar; Tin, Adrienne; Tönjes, Anke; Varbo, Anette; Witte, Daniel R; Wood, Andrew R; Yajnik, Pranav; Yao, Jie; Yengo, Loïc; Young, Robin; Amouyel, Philippe; Boeing, Heiner; Boerwinkle, Eric; Bottinger, Erwin P; Chowdhury, Rajiv; Collins, Francis S; Dedoussis, George; Dehghan, Abbas; Deloukas, Panos; Ferrario, Marco M; Ferrières, Jean; Florez, Jose C; Frossard, Philippe; Gudnason, Vilmundur; Harris, Tamara B; Heckbert, Susan R; Howson, Joanna M M; Ingelsson, Martin; Kathiresan, Sekar; Kee, Frank; Kuusisto, Johanna; Langenberg, Claudia; Launer, Lenore J; Lindgren, Cecilia M; Männistö, Satu; Meitinger, Thomas; Melander, Olle; Mohlke, Karen L; Moitry, Marie; Morris, Andrew D; Murray, Alison D; de Mutsert, Renée; Orho-Melander, Marju; Owen, Katharine R; Perola, Markus; Peters, Annette; Province, Michael A; Rasheed, Asif; Ridker, Paul M; Rivadineira, Fernando; Rosendaal, Frits R; Rosengren, Anders H; Salomaa, Veikko; Sheu, Wayne H-H; Sladek, Rob; Smith, Blair H; Strauch, Konstantin; Uitterlinden, André G; Varma, Rohit; Willer, Cristen J; Blüher, Matthias; Butterworth, Adam S; Chambers, John Campbell; Chasman, Daniel I; Danesh, John; van Duijn, Cornelia; Dupuis, Josée; Franco, Oscar H; Franks, Paul W; Froguel, Philippe; Grallert, Harald; Groop, Leif; Han, Bok-Ghee; Hansen, Torben; Hattersley, Andrew T; Hayward, Caroline; Ingelsson, Erik; Kardia, Sharon L R; Karpe, Fredrik; Kooner, Jaspal Singh; Köttgen, Anna; Kuulasmaa, Kari; Laakso, Markku; Lin, Xu; Lind, Lars; Liu, Yongmei; Loos, Ruth J F; Marchini, Jonathan; Metspalu, Andres; Mook-Kanamori, Dennis; Nordestgaard, Børge G; Palmer, Colin N A; Pankow, James S; Pedersen, Oluf; Psaty, Bruce M; Rauramaa, Rainer; Sattar, Naveed; Schulze, Matthias B; Soranzo, Nicole; Spector, Timothy D; Stefansson, Kari; Stumvoll, Michael; Thorsteinsdottir, Unnur; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Wareham, 
Nicholas J; Wilson, James G; Zeggini, Eleftheria; Scott, Robert A; Barroso, Inês; Frayling, Timothy M; Goodarzi, Mark O; Meigs, James B; Boehnke, Michael; Saleheen, Danish; Morris, Andrew P; Rotter, Jerome I; McCarthy, Mark I
2018-04-01
We aggregated coding variant data for 81,412 type 2 diabetes cases and 370,832 controls of diverse ancestry, identifying 40 coding variant association signals (P < 2.2 × 10⁻⁷); of these, 16 map outside known risk-associated loci. We make two important observations. First, only five of these signals are driven by low-frequency variants: even for these, effect sizes are modest (odds ratio ≤1.29). Second, when we used large-scale genome-wide association data to fine-map the associated variants in their regional context, accounting for the global enrichment of complex trait associations in coding sequence, compelling evidence for coding variant causality was obtained for only 16 signals. At 13 others, the associated coding variants clearly represent 'false leads' with potential to generate erroneous mechanistic inference. Coding variant associations offer a direct route to biological insight for complex diseases and identification of validated therapeutic targets; however, appropriate mechanistic inference requires careful specification of their causal contribution to disease predisposition.
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
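One of the benchmark quantities above, the Feynman histogram, reduces to the excess variance-to-mean ratio of neutron counts collected in fixed-width time gates. A minimal sketch of that statistic (the gate counts below are illustrative, not benchmark data):

```python
from statistics import mean, pvariance

def feynman_y(counts_per_gate):
    """Feynman-Y: variance-to-mean ratio of neutron counts in fixed
    time gates, minus 1. Uncorrelated (Poisson) emissions give Y = 0;
    correlated fission-chain multiplets inflate the variance and push
    Y above 0, which is why the statistic is sensitive to the
    multiplicity distributions the benchmarked codes must reproduce."""
    m = mean(counts_per_gate)
    return pvariance(counts_per_gate, m) / m - 1

# Variance equal to the mean (Poisson-like) -> Y = 0:
print(feynman_y([0, 2, 0, 2]))  # -> 0.0
```

In a real benchmark the gate width is swept and Y plotted against it, giving the Feynman histogram that each correlated fission multiplicity code is compared against.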
Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David
2002-01-01
A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
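The singles and doubles rates named above are driven by the first and second factorial moments of the neutron multiplicity distribution. As a hedged illustration of that connection (the distribution below is invented for the example; it is not evaluated nuclear data for 239Pu or 240Pu):

```python
# Sketch: reduced factorial moments of a neutron multiplicity distribution
# P(nu). The first moment scales the singles rate; the second factorial
# moment scales the doubles rate in multiplicity counting.

def factorial_moment(p, order):
    """Return sum over nu of nu*(nu-1)*...*(nu-order+1) * P(nu)."""
    total = 0.0
    for nu, prob in enumerate(p):
        term = prob
        for k in range(order):
            term *= (nu - k)
        total += term
    return total

# Illustrative multiplicity distribution (probabilities for nu = 0, 1, 2, ...).
p_nu = [0.03, 0.16, 0.32, 0.30, 0.14, 0.05]

nubar = factorial_moment(p_nu, 1)   # mean multiplicity -> singles scale
nu2 = factorial_moment(p_nu, 2)     # second factorial moment -> doubles scale
print(f"nu-bar = {nubar:.3f}, nu(nu-1) moment = {nu2:.3f}")
```
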
Toward Supersonic Retropropulsion CFD Validation
NASA Technical Reports Server (NTRS)
Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl
2011-01-01
This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
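The observed order-of-accuracy tests mentioned above typically compare a solution functional on three systematically refined grids. A minimal sketch of that Richardson-style calculation, using hypothetical functional values and refinement ratio (not data from the paper):

```python
# Sketch: observed order of accuracy from coarse/medium/fine grid solutions.
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
    """Estimate the observed order p from solutions on three grids related
    by a constant refinement ratio r: p = log(|e21|/|e32|) / log(r)."""
    e21 = abs(f_coarse - f_medium)
    e32 = abs(f_medium - f_fine)
    return math.log(e21 / e32) / math.log(refinement_ratio)

# Hypothetical drag values on three grids with refinement ratio r = 2;
# a code that meets its second-order design would give p near 2.
p = observed_order(1.0480, 1.0120, 1.0030, 2.0)
print(f"observed order ~ {p:.2f}")
```
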
Validation of a multi-layer Green's function code for ion beam transport
NASA Astrophysics Data System (ADS)
Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiations is needed. In consequence, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and like its predecessor is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and down shift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. In order to validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings and also provides verification of the HZETRN propagator.
Examples of Use of SINBAD Database for Nuclear Data and Code Validation
NASA Astrophysics Data System (ADS)
Kodeli, Ivan; Žerovnik, Gašper; Milocco, Alberto
2017-09-01
The SINBAD database currently contains compilations and evaluations of over 100 shielding benchmark experiments. The SINBAD database is widely used for code and data validation. Materials covered include: Air, N, O, H2O, Al, Be, Cu, graphite, concrete, Fe, stainless steel, Pb, Li, Ni, Nb, SiC, Na, W, V and mixtures thereof. Over 40 organisations from 14 countries and 2 international organisations have contributed data and work in support of SINBAD. Examples of the use of the database in the scope of different international projects, such as the Working Party on Evaluation Cooperation of the OECD and the European Fusion Programme, demonstrate the merit and possible usage of the database for the validation of modern nuclear data evaluations and new computer codes.
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
Language Recognition via Sparse Coding
2016-09-08
... a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the ... significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector ...
Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.
Prakash, R; Balaji Ganesh, A; Sivabalan, Somu
2017-05-01
The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health as well as postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse rate and body temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a Radio-Frequency transceiver, and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear acceleration based transmission duty cycling algorithm (LA-TDC). The real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal to noise ratio, packet delivery ratio, and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by proposing an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (NC-DRDC). From the experimental results, it is observed that the proposed algorithms reduce network traffic and end-to-end delay by an average of 27.8% and 21.6%, respectively, compared with traditional network coded wireless transmission. The wireless architecture is deployed in a hospital environment and the results are then successfully validated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dartevelle, Sebastian
2007-10-01
Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurements: hence, little to no real-time data exist to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors upon many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control the volcanic clouds, namely, the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, each of which uniquely and unambiguously represents one of the key phenomenologies.
Tan, Michael; Wilson, Ian; Braganza, Vanessa; Ignatiadis, Sophia; Boston, Ray; Sundararajan, Vijaya; Cook, Mark J; D'Souza, Wendyl J
2015-10-01
We report the diagnostic validity of a selection algorithm for identifying epilepsy cases. This was a retrospective validation study of International Classification of Diseases 10th Revision Australian Modification (ICD-10AM)-coded hospital records and pharmaceutical data, sampled from 300 consecutive potential epilepsy-coded cases and 300 randomly chosen cases without epilepsy from 3/7/2012 to 10/7/2013. Two epilepsy specialists independently validated the diagnosis of epilepsy. A multivariable logistic regression model was fitted to identify the optimum coding algorithm for epilepsy and was internally validated. 158/300 (52.6%) epilepsy-coded records and 0/300 (0%) nonepilepsy records were confirmed to have epilepsy. The kappa for interrater agreement was 0.89 (95% CI=0.81-0.97). The model utilizing epilepsy (G40), status epilepticus (G41), and ≥1 antiepileptic drug (AED) conferred the highest positive predictive value, 81.4% (95% CI=73.1-87.9), and a specificity of 99.9% (95% CI=99.9-100.0). The area under the receiver operating curve was 0.90 (95% CI=0.88-0.93). When combined with pharmaceutical data, the precision of case identification for an epilepsy data linkage design was considerably improved, offering strong potential for efficient and reasonably accurate case ascertainment in epidemiological studies. Copyright © 2015 Elsevier Inc. All rights reserved.
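The validation arithmetic behind such a coding algorithm reduces to a 2x2 confusion table: positive predictive value compares true positives with all algorithm-positive records, and specificity compares true negatives with all truly negative records. A minimal sketch with illustrative counts (not the study's cell counts):

```python
# Sketch: diagnostic validity metrics from a 2x2 confusion table.

def ppv(tp, fp):
    """Positive predictive value: fraction of algorithm-positives confirmed."""
    return tp / (tp + fp)

def specificity(tn, fp):
    """Specificity: fraction of true negatives the algorithm excludes."""
    return tn / (tn + fp)

# Hypothetical counts: 160 true positives, 40 false positives,
# 960 true negatives among 1000 truly negative records.
print(f"PPV = {ppv(160, 40):.2f}, specificity = {specificity(960, 40):.2f}")
```
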
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.
2014-01-01
The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell
In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capability in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as keff.
Li, Tingwen; Rogers, William A.; Syamlal, Madhava; ...
2016-07-29
Here, the MFiX suite of multiphase computational fluid dynamics (CFD) codes is being developed at the U.S. Department of Energy's National Energy Technology Laboratory (NETL). It includes several different approaches to multiphase simulation: MFiX-TFM, a two-fluid (Eulerian–Eulerian) model; MFiX-DEM, an Eulerian fluid model with a Lagrangian Discrete Element Model for the solids phase; and MFiX-PIC, an Eulerian fluid model with Lagrangian particle ‘parcels’ representing particle groups. These models are undergoing continuous development and application, with verification, validation, and uncertainty quantification (VV&UQ) as integrated activities. After a brief summary of recent progress in VV&UQ, this article highlights two recent accomplishments in the application of MFiX-TFM to fossil energy technology development. First, recent application of MFiX to the pilot-scale KBR TRIG™ Transport Gasifier located at DOE's National Carbon Capture Center (NCCC) is described. Gasifier performance over a range of operating conditions was modeled and compared to NCCC operational data to validate the ability of the model to predict parametric behavior. Second, comparison of code predictions at a detailed fundamental scale is presented, studying solid sorbents for the post-combustion capture of CO2 from flue gas. Specifically designed NETL experiments are being used to validate hydrodynamics and chemical kinetics for the sorbent-based carbon capture process.
NASA Astrophysics Data System (ADS)
Muraro, S.; Battistoni, G.; Belcari, N.; Bisogni, M. G.; Camarlinghi, N.; Cristoforetti, L.; Del Guerra, A.; Ferrari, A.; Fracchiolla, F.; Morrocchi, M.; Righetto, R.; Sala, P.; Schwarz, M.; Sportelli, G.; Topi, A.; Rosso, V.
2017-12-01
Ion beam irradiations can deliver conformal dose distributions minimizing damage to healthy tissues thanks to their characteristic dose profiles. Nevertheless, the location of the Bragg peak can be affected by different sources of range uncertainty, making treatment verification a critical issue. During treatment delivery, nuclear interactions between the ions and the irradiated tissues generate β+ emitters: the detection of this activity signal can be used for treatment monitoring if an expected activity distribution is available for comparison. Monte Carlo (MC) codes are widely used in the particle therapy community to evaluate radiation transport and interaction with matter. In this work, the FLUKA MC code was used to simulate the experimental conditions of irradiations performed at the Proton Therapy Center in Trento (IT). Several mono-energetic pencil beams were delivered on phantoms mimicking human tissues. The activity signals were acquired with a PET system (DoPET) based on two planar heads and designed to be installed along the beam line to acquire data also during the irradiation. Different acquisitions are analyzed and compared with the MC predictions, with a special focus on validating the PET detector response for activity range verification.
Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark
Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens
2013-01-01
AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
NASA Astrophysics Data System (ADS)
Tessonnier, T.; Mairani, A.; Brons, S.; Sala, P.; Cerutti, F.; Ferrari, A.; Haberer, T.; Debus, J.; Parodi, K.
2017-08-01
In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation of clinical establishment at HIT. 
Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.
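Range agreement of the kind quoted above (differences below 0.1 mm) is commonly assessed with a metric such as the distal 80% range, R80. As a hedged sketch of one way to extract R80 from a sampled depth-dose curve by linear interpolation (illustrative data, not the HIT measurements):

```python
# Sketch: distal 80% range (R80) from a sampled depth-dose curve.

def distal_r80(depths_mm, doses):
    """Depth on the distal falloff where dose drops to 80% of the maximum."""
    i_max = max(range(len(doses)), key=doses.__getitem__)
    target = 0.8 * doses[i_max]
    # Walk the distal falloff until the 80% level is bracketed.
    for i in range(i_max, len(doses) - 1):
        if doses[i] >= target >= doses[i + 1]:
            # Linear interpolation between the bracketing samples.
            frac = (doses[i] - target) / (doses[i] - doses[i + 1])
            return depths_mm[i] + frac * (depths_mm[i + 1] - depths_mm[i])
    raise ValueError("distal 80% level not bracketed by the samples")

# Illustrative Bragg-like curve (depth in mm, relative dose).
depths = [0, 20, 40, 60, 70, 75, 78, 80, 82]
doses = [0.30, 0.32, 0.38, 0.55, 0.80, 1.00, 0.60, 0.20, 0.05]
print(f"R80 = {distal_r80(depths, doses):.1f} mm")
```

Comparing R80 between a simulated and a measured curve gives a single range-difference number per energy.
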
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2017-04-01
This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
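The point reactor kinetics equations at the core of such a coupled solution can be sketched under the simplifying assumption of a single effective delayed-neutron group; the parameters and the forward-Euler integration below are illustrative, not Razorback's actual data or numerics:

```python
# Sketch: one-delayed-group point reactor kinetics,
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
# integrated with forward Euler from delayed-critical equilibrium.

def point_kinetics(rho, beta, lam, gen_time, n0, dt, steps):
    """Return relative power after integrating a constant reactivity step."""
    n = n0
    c = beta * n0 / (lam * gen_time)  # equilibrium precursor concentration
    for _ in range(steps):
        dn = ((rho - beta) / gen_time) * n + lam * c
        dc = (beta / gen_time) * n - lam * c
        n += dn * dt
        c += dc * dt
    return n

# Small positive reactivity step (0.1 beta): power undergoes a prompt jump
# to roughly beta/(beta - rho), then rises slowly on the delayed timescale.
n_final = point_kinetics(rho=0.0007, beta=0.007, lam=0.08,
                         gen_time=1e-5, n0=1.0, dt=1e-6, steps=20000)
print(f"relative power after 20 ms: {n_final:.3f}")
```
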
The Design of PSB-VVER Experiments Relevant to Accident Management
NASA Astrophysics Data System (ADS)
Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander
Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes(1) that are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing the measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix through the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of thermal losses to the environment. This issue particularly affects small-scale facilities and bears on the scaling approach related to the power and volume of the facility.
Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).
Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K
2013-02-01
We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS was examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents and this complex information can be obtained relatively quickly.
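Interrater reliability for observational coding schemes like the DOCS is commonly summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal illustration with made-up paired codes (not the study's data):

```python
# Sketch: Cohen's kappa for two raters' paired categorical codes.

def cohens_kappa(a, b, categories):
    """kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two raters for eight parent-adolescent exchanges.
rater1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
rater2 = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg"]
print(f"kappa = {cohens_kappa(rater1, rater2, ['pos', 'neg']):.2f}")
```
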
Aerothermal Assessment Of The EXPERT Flap In The SCIROCCO Wind Tunnel
NASA Astrophysics Data System (ADS)
Walpot, L.; Di Clemente, M.; Vos, J.; Etchells, J.; Trifoni, E.; Thoemel, J.; Gavira, J.
2011-05-01
In the frame of the “In-Flight Test Measurement Techniques for Aerothermodynamics” activity of the EXPERT Program, the EXPERT Instrumented Open Flap Assembly experiment has the objective of verifying the design/sensor integration and validating the CFD tools. Ground-based measurements were made in Europe’s largest high enthalpy plasma facility, Scirocco, in Italy. Two EXPERT flaps of the flight article, instrumented with 14 thermocouples, 5 pressure ports, a pyrometer, and an IR camera mounted in the cavity of the instrumented flap, will collect in-flight data. During the Scirocco experiment, an EXPERT flap model identical to the flight article was mounted at 45 deg on a holder including the cavity and was subjected to a hot plasma flow at an enthalpy up to 11 MJ/kg at a stagnation pressure of 7 bar. The test model features the same pressure sensors as the flight article. Hypersonic state-of-the-art codes were then used to perform code-to-code and wind tunnel-to-code comparisons, including the thermal response of the flap as collected during the tests by the sensors and camera.
New schemes for internally contracted multi-reference configuration interaction
NASA Astrophysics Data System (ADS)
Wang, Yubin; Han, Huixian; Lei, Yibo; Suo, Bingbing; Zhu, Haiyan; Song, Qi; Wen, Zhenyi
2014-10-01
In this work we present a new internally contracted multi-reference configuration interaction (MRCI) scheme by applying the graphical unitary group approach and the hole-particle symmetry. The latter allows a Distinct Row Table (DRT) to split into a number of sub-DRTs in the active space. In the new scheme a contraction is defined as a linear combination of arcs within a sub-DRT, and connected to the head and tail of the DRT through up-steps and down-steps to generate internally contracted configuration functions. The new scheme deals with the closed-shell (hole) orbitals and external orbitals in the same manner and thus greatly simplifies calculations of coupling coefficients and CI matrix elements. As a result, the number of internal orbitals is no longer a bottleneck of MRCI calculations. The validity and efficiency of the new ic-MRCI code are tested by comparing with the corresponding WK code of the MOLPRO package. The energies obtained from the two codes are essentially identical, and the computational efficiencies of the two codes have their own advantages.
Computational Fluid Dynamics Technology for Hypersonic Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2003-01-01
Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Marshall, William BJ J
In the course of criticality code validation, outlier cases are frequently encountered. Historically, the causes of these unexpected results could be diagnosed only through comparison with other similar cases or through the known presence of a unique component of the critical experiment. The sensitivity and uncertainty (S/U) analysis tools available in the SCALE 6.1 code system provide a much broader range of options to examine underlying causes of outlier cases. This paper presents some case studies performed as a part of the recent validation of the KENO codes in SCALE 6.1 using S/U tools to examine potential causes of biases.
Monte Carlo calculations of positron emitter yields in proton radiotherapy.
Seravalli, E; Robert, C; Bauer, J; Stichelbaut, F; Kurz, C; Smeets, J; Van Ngoc Ty, C; Schaart, D R; Buvat, I; Parodi, K; Verhaegen, F
2012-03-21
Positron emission tomography (PET) is a promising tool for monitoring the three-dimensional dose distribution in charged particle radiotherapy. PET imaging during or shortly after proton treatment is based on the detection of annihilation photons following the β+-decay of radionuclides resulting from nuclear reactions in the irradiated tissue. Therapy monitoring is achieved by comparing the measured spatial distribution of irradiation-induced β+-activity with the predicted distribution based on the treatment plan. The accuracy of the calculated distribution depends on the correctness of the computational models, implemented in the employed Monte Carlo (MC) codes, that describe the interactions of the charged particle beam with matter and the production of radionuclides and secondary particles. However, no well-established theoretical models exist for predicting the nuclear interactions, so phenomenological models based on parameters derived from experimental data are typically used. Unfortunately, the experimental data presently available are insufficient to validate such phenomenological hadronic interaction models. Hence, a comparison among the models used by the different MC packages is desirable. In this work, starting from a common geometry, we compare the performance of the MCNPX, GATE and PHITS MC codes in predicting the amount and spatial distribution of proton-induced activity, at therapeutic energies, against the already experimentally validated PET modelling based on the FLUKA MC code. In particular, we show how the amount of β+-emitters produced in tissue-like media depends on the physics model and cross-section data used to describe the proton nuclear interactions, thus calling for future experimental campaigns aimed at supporting improvements of MC modelling for clinical application of PET monitoring. © 2012 Institute of Physics and Engineering in Medicine
Validity of administrative coding in identifying patients with upper urinary tract calculi.
Semins, Michelle J; Trock, Bruce J; Matlaga, Brian R
2010-07-01
Administrative databases are increasingly used for epidemiological investigations. We performed a study to assess the validity of ICD-9 codes for upper urinary tract stone disease in an administrative database. We retrieved the records of all inpatients and outpatients at Johns Hopkins Hospital between November 2007 and October 2008 with an ICD-9 code of 592, 592.0, 592.1 or 592.9 as one of the first 3 diagnosis codes. A random number generator selected 100 encounters for further review. We considered a patient to have a true diagnosis of an upper tract stone if the medical records specifically referenced a kidney stone event, or included current or past treatment for a kidney stone. Descriptive and comparative analyses were performed. A total of 8,245 encounters coded as upper tract calculus were identified and 100 were randomly selected for review. Two patients could not be identified within the electronic medical record and were excluded from the study. The positive predictive value of using all ICD-9 codes for an upper tract calculus (592, 592.0, 592.1) to identify subjects with renal or ureteral stones was 95.9%. For 592.0 alone, the positive predictive value was 85%. Although the positive predictive value for 592.1 alone was 100%, 26 subjects (76%) with a ureteral stone were not appropriately billed with this code. ICD-9 coding for urinary calculi is likely to be sufficiently valid to be useful in studies using administrative data to analyze stone disease. However, ICD-9 coding is not a reliable means of distinguishing between subjects with renal and ureteral calculi. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
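The positive predictive value arithmetic used throughout these coding-validation studies is a simple proportion of chart-confirmed cases among code-flagged records. A minimal sketch (the confirmed count of 94 is inferred from the reported 95.9% and the 98 reviewed encounters; it is not stated explicitly in the abstract):

```python
# PPV as used in administrative-code validation: of the records flagged
# by the ICD-9 codes and then chart-reviewed, what fraction were true
# upper tract stone events?

def ppv(true_positives: int, flagged: int) -> float:
    """PPV = TP / (TP + FP), with `flagged` = TP + FP reviewed records."""
    return true_positives / flagged

reviewed = 98   # 100 sampled encounters minus 2 unidentifiable records
confirmed = 94  # assumed count, back-calculated from the reported 95.9%
print(f"PPV = {ppv(confirmed, reviewed):.1%}")  # -> PPV = 95.9%
```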
Pang, Jack X Q; Ross, Erin; Borman, Meredith A; Zimmer, Scott; Kaplan, Gilaad G; Heitman, Steven J; Swain, Mark G; Burak, Kelly W; Quan, Hude; Myers, Robert P
2015-09-11
Epidemiologic studies of alcoholic hepatitis (AH) have been hindered by the lack of a validated International Classification of Disease (ICD) coding algorithm for use with administrative data. Our objective was to validate coding algorithms for AH using a hospitalization database. The Hospital Discharge Abstract Database (DAD) was used to identify consecutive adults (≥18 years) hospitalized in the Calgary region with a diagnosis code for AH (ICD-10, K70.1) between 01/2008 and 08/2012. Medical records were reviewed to confirm the diagnosis of AH, defined as a history of heavy alcohol consumption, elevated AST and/or ALT (<300 U/L), serum bilirubin >34 μmol/L, and elevated INR. Subgroup analyses were performed according to the diagnosis field in which the code was recorded (primary vs. secondary) and AH severity. Algorithms that incorporated ICD-10 codes for cirrhosis and its complications were also examined. Of 228 potential AH cases, 122 patients had confirmed AH, corresponding to a positive predictive value (PPV) of 54% (95% CI 47-60%). PPV improved when AH was the primary versus a secondary diagnosis (67% vs. 21%; P < 0.001). Algorithms that included diagnosis codes for ascites (PPV 75%; 95% CI 63-86%), cirrhosis (PPV 60%; 47-73%), and gastrointestinal hemorrhage (PPV 62%; 51-73%) had improved performance; however, the prevalence of these diagnoses in confirmed AH cases was low (29-39%). In conclusion, the low PPV of the diagnosis code for AH suggests that caution is necessary if this hospitalization database is used in large-scale epidemiologic studies of this condition.
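The reported interval is consistent with a simple normal-approximation confidence interval on the PPV proportion; whether the authors used this or an exact binomial method is not stated, so the sketch below is only a plausibility check:

```python
import math

def ppv_with_ci(tp: int, n: int, z: float = 1.96):
    """Proportion tp/n with a normal-approximation 95% confidence interval."""
    p = tp / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Counts from the abstract: 122 confirmed AH among 228 code-flagged cases.
p, lo, hi = ppv_with_ci(122, 228)
print(f"PPV = {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # -> PPV = 54% (95% CI 47%-60%)
```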
Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations
Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia
2016-01-01
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618
Radiation Protection Studies for Medical Particle Accelerators using Fluka Monte Carlo Code.
Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario
2017-04-01
Radiation protection (RP) in the use of medical cyclotrons involves many aspects, both in routine use and in the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytical methods for calculating shielding and material activation in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate, up-to-date libraries for the transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital was used to evaluate: the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; and the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of the physical and transport parameters to be used in the energy range of interest, through an extensive measurement campaign of the neutron ambient dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility. © The Author 2016.
FY2017 Pilot Project Plan for the Nuclear Energy Knowledge and Validation Center Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Weiju
To prepare for technical development of computational code validation under the Nuclear Energy Knowledge and Validation Center (NEKVaC) initiative, several meetings were held by a group of experts from the Idaho National Laboratory (INL) and the Oak Ridge National Laboratory (ORNL) to develop requirements for, and formulate a structure for, a transient fuel database by leveraging existing resources. These discussions concluded that a pilot project is needed to address the most fundamental issues, one that can generate immediate stimulus for near-future validation developments as well as long-lasting benefits for NEKVaC operation. The present project is proposed based on the consensus of these discussions. Analysis of common scenarios in code validation indicates that the inability to acquire satisfactory validation data is often a showstopper that must be tackled before any confident validation developments can be carried out. Validation data are usually scattered across different places, often with interrelationships among the data poorly documented; incomplete, with information for some parameters missing; nonexistent; or unrealistic to generate experimentally. Furthermore, with very different technical backgrounds, the modeler, the experimentalist, and the knowledgebase developer who must be involved in validation data development often cannot communicate effectively without a data package template that is representative of the data structure for the information domain of interest to the desired code validation. This pilot project proposes to use the legacy TREAT Experiments Database to provide core elements for creating an ideal validation data package. Data gaps and missing data interrelationships will be identified from these core elements. All the identified missing elements will then be filled in with experimental data, if available from other existing sources, or with dummy data if nonexistent.
The resulting hybrid validation data package (composed of experimental and dummy data) will provide a clear and complete instance delineating the structure of the desired validation data and enabling effective communication among the modeler, the experimentalist, and the knowledgebase developer. With a good common understanding of the desired data structure by these three parties of subject matter experts, further hunting for existing data can be conducted effectively, generation of new experimental data can be pursued realistically, the knowledgebase schema can be designed practically, and code validation can be planned confidently.
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective: Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example: To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion: There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
Calderwood, Michael S.; Kleinman, Ken; Murphy, Michael V.; Platt, Richard; Huang, Susan S.
2014-01-01
Background: Deep and organ/space surgical site infections (D/OS SSI) cause significant morbidity, mortality, and costs. Rates are publicly reported and increasingly used as quality metrics affecting hospital payment. Lack of standardized surveillance methods threatens the accuracy of reported data and decreases confidence in comparisons based upon these data. Methods: We analyzed data from national validation studies that used Medicare claims to trigger chart review for SSI confirmation after coronary artery bypass graft surgery (CABG) and hip arthroplasty. We evaluated code performance (sensitivity and positive predictive value) to select diagnosis codes that best identified D/OS SSI. Codes were analyzed individually and in combination. Results: Analysis included 143 patients with D/OS SSI after CABG and 175 patients with D/OS SSI after hip arthroplasty. For CABG, 9 International Classification of Diseases, 9th Revision (ICD-9) diagnosis codes identified 92% of D/OS SSI, with 1 D/OS SSI identified for every 4 cases with a diagnosis code. For hip arthroplasty, 6 ICD-9 diagnosis codes identified 99% of D/OS SSI, with 1 D/OS SSI identified for every 2 cases with a diagnosis code. Conclusions: This standardized and efficient approach for identifying D/OS SSI can be used by hospitals to improve case detection and public reporting. This method can also be used to identify potential D/OS SSI cases for review during hospital audits for data validation. PMID:25734174
Natural language processing of clinical notes for identification of critical limb ischemia.
Afzal, Naveed; Mallipeddi, Vishnu Priya; Sohn, Sunghwan; Liu, Hongfang; Chaudhry, Rajeev; Scott, Christopher G; Kullo, Iftikhar J; Arruda-Olson, Adelaide M
2018-03-01
Critical limb ischemia (CLI) is a complication of advanced peripheral artery disease (PAD) with diagnosis based on the presence of clinical signs and symptoms. However, automated identification of cases from electronic health records (EHRs) is challenging due to absence of a single definitive International Classification of Diseases (ICD-9 or ICD-10) code for CLI. In this study, we extend a previously validated natural language processing (NLP) algorithm for PAD identification to develop and validate a subphenotyping NLP algorithm (CLI-NLP) for identification of CLI cases from clinical notes. We compared performance of the CLI-NLP algorithm with CLI-related ICD-9 billing codes. The gold standard for validation was human abstraction of clinical notes from EHRs. Compared to billing codes the CLI-NLP algorithm had higher positive predictive value (PPV) (CLI-NLP 96%, billing codes 67%, p < 0.001), specificity (CLI-NLP 98%, billing codes 74%, p < 0.001) and F1-score (CLI-NLP 90%, billing codes 76%, p < 0.001). The sensitivity of these two methods was similar (CLI-NLP 84%; billing codes 88%; p < 0.12). The CLI-NLP algorithm for identification of CLI from narrative clinical notes in an EHR had excellent PPV and has potential for translation to patient care as it will enable automated identification of CLI cases for quality projects, clinical decision support tools and support a learning healthcare system. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
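The F1-scores reported in this abstract are simply the harmonic mean of PPV (precision) and sensitivity (recall), so they can be checked directly from the other two figures; a minimal sketch:

```python
def f1(ppv: float, sensitivity: float) -> float:
    """F1-score: harmonic mean of precision (PPV) and recall (sensitivity)."""
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# Figures from the abstract:
print(f"CLI-NLP       F1 = {f1(0.96, 0.84):.0%}")  # -> CLI-NLP       F1 = 90%
print(f"billing codes F1 = {f1(0.67, 0.88):.0%}")  # -> billing codes F1 = 76%
```

Both values reproduce the reported 90% and 76%, which is a useful internal-consistency check when reading such validation papers.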
Tamaki, S; Sakai, M; Yoshihashi, S; Manabe, M; Zushi, N; Murata, I; Hoashi, E; Kato, I; Kuri, S; Oshiro, S; Nagasaki, M; Horiike, H
2015-12-01
A mock-up experiment for the development of an accelerator-based neutron source for the Osaka University BNCT project was carried out at the University of Birmingham, UK. In this paper, the spatial distribution of neutron flux intensity was evaluated by the foil activation method. The validity of the design code system was confirmed by comparing measured gold foil activities with calculations. As a result, it was found that the epi-thermal neutron beam was well collimated by our neutron moderator assembly. The design accuracy was also evaluated to be within 20% error. Copyright © 2015 Elsevier Ltd. All rights reserved.
VLF Trimpi modelling on the path NWC-Dunedin using both finite element and 3D Born modelling
NASA Astrophysics Data System (ADS)
Nunn, D.; Hayakawa, K. B. M.
1998-10-01
This paper investigates the numerical modelling of VLF Trimpis, produced by a D region inhomogeneity on the great circle path. Two different codes are used to model Trimpis on the path NWC-Dunedin. The first is a 2D Finite Element Method Code (FEM), whose solutions are rigorous and valid in the strong scattering or non-Born limit. The second code is a 3D model that invokes the Born approximation. The predicted Trimpis from these codes compare very closely, thus confirming the validity of both models. The modal scattering matrices for both codes are analysed in some detail and are found to have a comparable structure. They indicate strong scattering between the dominant TM modes. Analysis of the scattering matrix from the FEM code shows that departure from linear Born behaviour occurs when the inhomogeneity has a horizontal scale size of about 100 km and a maximum electron density enhancement at 75 km altitude of about 6 electrons.
A measure of short-term visual memory based on the WISC-R coding subtest.
Collaer, M L; Evans, J R
1982-07-01
The Coding subtest of the WISC-R was adapted to provide a measure of visual memory. Three hundred and five children, aged 8 through 12, were administered the Coding test using standard directions. A few seconds after completion the key was taken away, and each child was given a paper with only the digits and asked to write the appropriate matching symbol below each. This was termed "Coding Recall." To provide validity data, a subgroup of 50 subjects was also administered the Attention Span for Letters subtest from the Detroit Tests of Learning Aptitude (as a test of visual memory for sequences of letters) and a Bender Gestalt recall test (as a measure of visual memory for geometric forms). Coding Recall means and standard deviations are reported separately by sex and age level. Implications for clinicians are discussed. Reservations about clinical use of the data are given in view of the possible lack of representativeness of the sample used and the limited reliability and validity of Coding Recall.
2014-01-01
Background: The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods: We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results: We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions: This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532
Development of the 3DHZETRN code for space radiation protection
NASA Astrophysics Data System (ADS)
Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert
Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking will help quantify uncertainty with MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes. Connection of the 3DHZETRN to general geometry will be discussed.
Quantum Dense Coding About a Two-Qubit Heisenberg XYZ Model
NASA Astrophysics Data System (ADS)
Xu, Hui-Yun; Yang, Guo-Hui
2017-09-01
By taking into account a nonuniform magnetic field, quantum dense coding with thermal entangled states of a two-qubit anisotropic Heisenberg XYZ chain is investigated in detail. We mainly show how the dense coding capacity (χ) changes with the different parameters. It is found that the dense coding capacity χ can be enhanced by decreasing the magnetic field B, the degree of inhomogeneity b, and the temperature T, or by increasing the coupling constant along the z-axis, Jz. In addition, we find that χ remains stable as the anisotropy of the XY plane, Δ, changes under certain temperature conditions. Through studying the effect of the different parameters on χ, we show that the values of B, b, Jz, and Δ can be tuned, or the temperature T adjusted, to obtain a valid dense coding capacity (χ > 1). Moreover, the temperature plays a key role in adjusting the value of the dense coding capacity χ. A valid dense coding capacity can always be obtained in the low-temperature limit.
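As a sketch of the quantity under study: the dense coding capacity for a shared two-qubit state ρ is commonly taken as the Holevo expression χ = log₂2 + S(ρ_B) − S(ρ), with S the von Neumann entropy and ρ_B Bob's reduced state; dense coding is valid when χ > 1. The Hamiltonian below is the standard two-qubit XYZ chain in a nonuniform field, but the specific parameter values and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def chi(Jx, Jy, Jz, B, b, T):
    """Dense coding capacity of the thermal state of a two-qubit XYZ chain
    in a nonuniform field (B + b on qubit 1, B - b on qubit 2)."""
    H = (Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy) + Jz * np.kron(sz, sz)
         + (B + b) * np.kron(sz, I2) + (B - b) * np.kron(I2, sz))
    w, V = np.linalg.eigh(H)
    p = np.exp(-(w - w.min()) / T)           # shifted to avoid overflow
    rho = (V * (p / p.sum())) @ V.conj().T   # thermal (Gibbs) state
    rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # trace out qubit 1
    return 1.0 + entropy(rho_B) - entropy(rho)

# Isotropic antiferromagnet at low temperature: the entangled singlet ground
# state dominates, so chi approaches the two-qubit maximum of 2 bits.
print(f"chi = {chi(1, 1, 1, 0, 0, 0.05):.4f}")  # -> chi = 2.0000
```

Raising T mixes in the triplet states and pushes χ below the classical threshold of 1, consistent with the paper's qualitative finding that temperature controls whether valid dense coding survives.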
Kluger, Michael D.; Sofair, Andre N.; Heye, Constance J.; Meek, James I.; Sodhi, Rajesh K.; Hadler, James L.
2001-01-01
Objectives. This study investigated retrospective validation of a prospective surveillance system for unexplained illness and death due to possibly infectious causes. Methods. A computerized search of hospital discharge data identified patients with potential unexplained illness and death due to possibly infectious causes. Medical records for such patients were reviewed for satisfaction of study criteria. Cases identified retrospectively were combined with prospectively identified cases to form a reference population against which sensitivity could be measured. Results. Retrospective validation was 41% sensitive, whereas prospective surveillance was 73% sensitive. The annual incidence of unexplained illness and death due to possibly infectious causes during 1995 and 1996 in the study county was conservatively estimated to range from 2.7 to 6.2 per 100 000 residents aged 1 to 49 years. Conclusions. Active prospective surveillance for unexplained illness and death due to possibly infectious causes is more sensitive than retrospective surveillance conducted through a published list of indicator codes. However, retrospective surveillance can be a feasible and much less labor-intensive alternative to active prospective surveillance when the latter is not possible or desired. PMID:11499106
Sokal, Brad; Uswatte, Gitendra; Barman, Joydip; Brewer, Michael; Byrom, Ezekiel; Latten, Jessica; Joseph, Jeethu; Serafim, Camila; Ghaffari, Touraj; Sarkar, Nilanjan
2014-03-01
To test the convergent validity of an objective method, Sensor-Enabled Radio-frequency Identification System for Monitoring Arm Activity (SERSMAA), that distinguishes between functional and nonfunctional activity. Cross-sectional study. Laboratory. Participants (N=25) were ≥0.2 years poststroke (median, 9) with a wide range of severity of upper-extremity hemiparesis. Not applicable. After stroke, laboratory tests of the motor capacity of the more-affected arm poorly predict spontaneous use of that arm in daily life. However, available subjective methods for measuring everyday arm use are vulnerable to self-report biases, whereas available objective methods only provide information on the amount of activity without regard to its relation with function. The SERSMAA consists of a proximity-sensor receiver on the more-affected arm and multiple units placed on objects. Functional activity is signaled when the more-affected arm is close to an object that is moved. Participants were videotaped during a laboratory simulation of an everyday activity, that is, setting a table with cups, bowls, and plates instrumented with transmitters. Observers independently coded the videos in 2-second blocks with a validated system for classifying more-affected arm activity. There was a strong correlation (r=.87, P<.001) between time that the more-affected arm was used for handling objects according to the SERSMAA and functional activity according to the observers. The convergent validity of SERSMAA for measuring more-affected arm functional activity after stroke was supported in a simulation of everyday activity. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
NASA Astrophysics Data System (ADS)
Greenwood, L. R.; Johnson, C. D.
2016-02-01
The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. 
The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator workbook for use in correcting the measured activities. Output from the SigPhi Calculator is automatically produced, and consists of a portion of the STAYSL PNNL input file data that is required to run the spectral adjustment calculations. Within STAYSL PNNL, the least-squares process is performed in one step, without iteration, and provides rapid results on PC platforms. STAYSL PNNL creates multiple output files with tabulated results, data suitable for plotting, and data formatted for use in subsequent radiation damage calculations using the SPECTER computer code (which is not included in the STAYSL PNNL suite). All components of the software suite have undergone extensive testing and validation prior to release and test cases are provided with the package.
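The least-squares adjustment the abstract describes can be sketched compactly. The update below is the standard generalized least-squares (GLS) formula that STAY'SL-type codes implement; the group structure, uncertainties, and all numbers are illustrative assumptions, not data or output from STAYSL PNNL itself.

```python
import numpy as np

# GLS spectral adjustment: given a prior group spectrum phi0 with
# covariance M, measured saturated activation rates a with covariance V,
# and a response matrix A of group-averaged cross sections,
#   phi    = phi0 + M A^T (A M A^T + V)^-1 (a - A phi0)
#   M_post = M - M A^T (A M A^T + V)^-1 A M

def gls_adjust(phi0, M, A, a, V):
    """One-step GLS adjustment of a group spectrum against activation rates."""
    K = M @ A.T @ np.linalg.inv(A @ M @ A.T + V)   # gain matrix
    return phi0 + K @ (a - A @ phi0), M - K @ A @ M

rng = np.random.default_rng(0)
phi0 = np.array([1.0, 0.8, 0.6, 0.4, 0.2])  # 5-group prior spectrum (arb. units)
M = np.diag((0.20 * phi0) ** 2)             # 20% prior uncertainty
A = rng.uniform(0.1, 1.0, size=(3, 5))      # 3 reactions x 5 groups
V = np.diag((0.05 * (A @ phi0)) ** 2)       # 5% measurement uncertainty
a = 1.1 * (A @ phi0)                        # measured rates, 10% above prediction

phi, M_post = gls_adjust(phi0, M, A, a, V)
print("adjusted spectrum:", np.round(phi, 3))
print("uncertainty reduced:", np.trace(M_post) < np.trace(M))
```

The posterior covariance `M_post` always shrinks relative to the prior `M`, which is the sense in which the activation measurements constrain the spectrum.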
Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.
1990-01-01
Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.
Zhu, Yuanqi; States, J. Christopher; Wang, Yang; Hein, David W.
2011-01-01
BACKGROUND The functional effects of N-acetyltransferase 1 (NAT1) polymorphisms and haplotypes are poorly understood, compromising the validity of associations reported with diseases including birth defects and numerous cancers. METHODS We investigated the effects of genetic polymorphisms within the NAT1 coding region and the 3′-untranslated region (3′-UTR) and their associated haplotypes on N- and O-acetyltransferase catalytic activities, and NAT1 mRNA and protein levels following recombinant expression in COS-1 cells. RESULTS 1088T>A (rs1057126; 3′-UTR) and 1095C>A (rs15561; 3′-UTR) each slightly reduced NAT1 catalytic activity and NAT1 mRNA and protein levels. A 9-base pair (TAATAATAA) deletion between nucleotides 1065-1090 (3′-UTR) reduced NAT1 catalytic activity and NAT1 mRNA and protein levels. In contrast, a 445G>A (rs4987076; V149I), 459G>A (rs4986990; T153T), 640T>G (rs4986783; S214A) coding region haplotype present in NAT1*11 increased NAT1 catalytic activity and NAT1 protein, but not NAT1 mRNA levels. A combination of the 9-base pair (TAATAATAA) deletion and the 445G>A, 459G>A, 640T>G coding region haplotypes, both present in NAT1*11, appeared to neutralize the opposing effects on NAT1 protein and catalytic activity, resulting in levels of NAT1 protein and catalytic activity that did not differ significantly from the NAT1*4 reference. CONCLUSIONS Since 1095C>A (3′-UTR) is the sole polymorphism present in NAT1*3, our data suggest that NAT1*3 is not functionally equivalent to the NAT1*4 reference. Furthermore, our findings provide biological support for reported associations of 1088T>A and 1095C>A polymorphisms with birth defects. PMID:21290563
Validating LES for Jet Aeroacoustics
NASA Technical Reports Server (NTRS)
Bridges, James
2011-01-01
Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and interpreting the massive datasets that result when such dreams come true. This paper primarily addresses the former: the use of advanced experimental techniques, such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem by discussing which measures, critical for aeroacoustics, should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And what are the measures to be used in validation? This paper argues that the issue of accuracy be addressed by cross-facility and cross-disciplinary examination of modern datasets, along with increased reporting of internal quality checks in PIV analysis. Further, it is argued that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound.
Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily
2015-03-30
Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
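The level-dependent sensitivities above follow from comparing automated and gold-standard codes at successively truncated ATC levels (7, 5, 4, and 3 characters). A minimal sketch, with invented codes and counts rather than data from the 45 and Up Study:

```python
# Hypothetical illustration: manual (gold standard) vs automated ATC codes
# assigned to the same free-text entries.  All codes and values are invented.
manual =    ["C07AB02", "A02BC01", "N02BE01", "A11CC05", "C07AB02"]
automated = ["C07AB02", "A02BC02", "N02BE01", None,      "C07AB03"]

def sensitivity(chars):
    """Fraction of manually coded entries that the automated method matched
    when both codes are truncated to the given ATC level (character count)."""
    hits = sum(1 for m, a in zip(manual, automated)
               if a is not None and m[:chars] == a[:chars])
    return hits / len(manual)

# 7 = chemical substance, 5 = chemical subgroup, 4 = pharmacological
# subgroup, 3 = therapeutic subgroup.
for level in (7, 5, 4, 3):
    print(level, sensitivity(level))
```

Coarser truncation can only merge codes, so sensitivity is non-decreasing from the 7-character to the 3-character level, mirroring the 79% to 83.8% progression reported above.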
Hoshi, M; Hiraoka, M; Hayakawa, N; Sawada, S; Munaka, M; Kuramoto, A; Oka, T; Iwatani, K; Shizuma, K; Hasai, H
1992-11-01
A benchmark test of the Monte Carlo neutron and photon transport code system (MCNP) was performed using a 252Cf fission neutron source to validate the use of the code for the energy spectrum analyses of Hiroshima atomic bomb neutrons. Nuclear data libraries used in the Monte Carlo neutron and photon transport code calculation were ENDF/B-III, ENDF/B-IV, LASL-SUB, and ENDL-73. The neutron moderators used were granite (the main component of which is SiO2, with a small fraction of hydrogen), Newlight [polyethylene with 3.7% boron (natural)], ammonium chloride (NH4Cl), and water (H2O). Each moderator was 65 cm thick. The neutron detectors were gold and nickel foils, which were used to detect thermal and epithermal neutrons (4.9 eV) and fast neutrons (> 0.5 MeV), respectively. Measured activity data from neutron-irradiated gold and nickel foils in these moderators decreased to about 1/1,000th or 1/10,000th, which correspond to about 1,500 m ground distance from the hypocenter in Hiroshima. For both gold and nickel detectors, the measured activities and the calculated values agreed within 10%. The slopes of the depth-yield relations in each moderator, except granite, were similar for neutrons detected by the gold and nickel foils. From the results of these studies, the Monte Carlo neutron and photon transport code was verified to be accurate enough for use with the elements hydrogen, carbon, nitrogen, oxygen, silicon, chlorine, and cadmium, and for the incident 252Cf fission spectrum neutrons.
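Two quantities in the benchmark above lend themselves to a short worked sketch: the effective relaxation length implied by a roughly exponential 1/1,000 activity reduction over 65 cm of moderator, and the "within 10%" calculated-to-experimental (C/E) agreement criterion. The foil activities below are invented placeholders, not the Hiroshima benchmark data:

```python
import math

# If foil activity falls roughly exponentially with moderator depth, a
# 1/1,000 reduction over 65 cm implies an effective relaxation length:
depth = 65.0                        # cm of moderator
attenuation = 1.0e3                 # overall activity reduction factor
relax_len = depth / math.log(attenuation)   # cm per e-fold

# C/E (calculated-to-experimental) agreement check of the kind used in
# benchmark validation; values are invented normalized foil activities.
measured   = [1.00, 0.52, 0.27]
calculated = [0.97, 0.50, 0.29]
ce = [c / e for c, e in zip(calculated, measured)]
agree = all(abs(r - 1.0) <= 0.10 for r in ce)   # "within 10%" criterion
```

With these numbers the relaxation length comes out near 9.4 cm, and all three invented C/E ratios satisfy the 10% criterion.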
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
Automated Transfer Vehicle (ATV) Critical Safety Software Overview
NASA Astrophysics Data System (ADS)
Berthelier, D.
2002-01-01
The European Automated Transfer Vehicle is an unmanned transportation system designed to dock to the International Space Station (ISS) and to contribute to the logistic servicing of the ISS. Concisely, ATV control is realized by a nominal flight control function (using computers, software, sensors, and actuators). In order to cover the extreme situations where this nominal chain cannot ensure a safe trajectory with respect to the ISS, in which unsafe free-drift trajectories can be encountered, a segregated proximity flight safety function is activated. This function relies notably on a segregated computer, the Monitoring and Safing Unit (MSU); in case of major ATV malfunction detection, ATV is then controlled by MSU software. This software is therefore critical, because an MSU software failure could result in catastrophic consequences. This paper provides an overview both of this software's functions and of the software development and validation method, which is specific considering its criticality. The first part of the paper briefly describes the proximity flight safety chain. The second part deals with the software functions. Indeed, the MSU software is in charge of monitoring the nominal computers and ATV corridors, using its own navigation algorithms, and, if an abnormal situation is detected, it is in charge of ATV control during the Collision Avoidance Manoeuvre (CAM), consisting of an attitude-controlled braking boost, followed by a post-CAM manoeuvre: a Sun-pointed ATV attitude control for up to 24 hours on a safe trajectory. Monitoring, navigation, and control algorithm principles are presented. The third part of this paper describes the development and validation process: algorithm functional studies, Ada coding and unit validations; algorithm Ada code integration and validation on a specific non-real-time MATLAB/SIMULINK simulator; and global software functional engineering, architectural design, unit testing, integration, and validation on the target computer.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1993-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1992-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aly, A.; Avramova, Maria; Ivanov, Kostadin
To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code through a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.
FY2012 summary of tasks completed on PROTEUS-thermal work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.H.; Smith, M.A.
2012-06-06
PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions.
The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation, for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized-geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was used directly for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen in VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region. To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR.
The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal tasks: (1) Unification of the different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
Development of PRIME for irradiation performance analysis of U-Mo/Al dispersion fuel
NASA Astrophysics Data System (ADS)
Jeong, Gwan Yoon; Kim, Yeon Soo; Jeong, Yong Jin; Park, Jong Man; Sohn, Dong-Seong
2018-04-01
A prediction code for the thermo-mechanical performance of research reactor fuel (PRIME) has been developed, with the implementation of newly developed models, to analyze the irradiation behavior of U-Mo dispersion fuel. The code is capable of predicting the two-dimensional thermal and mechanical performance of U-Mo dispersion fuel during irradiation. A finite element method was employed to solve the governing equations for thermal and mechanical equilibria. Temperature- and burnup-dependent material properties of the fuel meat constituents and cladding were used. The numerical solution schemes in PRIME were verified by benchmarking against solutions obtained using a commercial finite element analysis program (ABAQUS). The code was validated using irradiation data from the RERTR, HAMP-1, and E-FUTURE tests. The measured irradiation data used in the validation were interaction layer (IL) thickness and volume fractions of fuel meat constituents for the thermal analysis, and profiles of plate thickness change and fuel meat swelling for the mechanical analysis. The prediction results were in good agreement with the measurement data for both thermal and mechanical analyses, confirming the validity of the code.
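The thermal side of a finite-element solve like the one described above can be sketched in one dimension. This is not PRIME: it is a hedged analogue solving steady conduction -kT'' = q''' across a thin fuel-meat slab with linear elements and fixed surface temperatures, with all material values invented for illustration.

```python
import numpy as np

# Invented slab data: conductivity (W/m-K), volumetric heating (W/m^3),
# meat thickness (m), number of linear elements.
k, q, L, n = 15.0, 5.0e8, 0.64e-3, 64
h = L / n
K = np.zeros((n + 1, n + 1))        # global stiffness matrix
f = np.zeros(n + 1)                 # global load vector
for e in range(n):                  # assemble element stiffness and load
    K[e:e+2, e:e+2] += (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f[e:e+2] += q * h / 2.0         # consistent load for uniform heating
T_s = 400.0                         # K, prescribed at both surfaces
K[0, :] = 0.0; K[0, 0] = 1.0; f[0] = T_s    # Dirichlet BC, left face
K[n, :] = 0.0; K[n, n] = 1.0; f[n] = T_s    # Dirichlet BC, right face
T = np.linalg.solve(K, f)

# Exact solution for comparison: T(x) = T_s + q x (L - x) / (2k),
# peaking at the mid-plane.
T_exact_mid = T_s + q * L**2 / (8.0 * k)
```

For this 1-D problem with a uniform source, linear elements reproduce the exact temperatures at the nodes, so the computed mid-plane value matches the analytic peak.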
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downar, Thomas
This report summarizes the current status of VERA-CS verification and validation (V&V) for PWR core follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single-physics codes before focusing on the V&V of the coupled-physics solution. The report summarizes the V&V of each of the single-physics code systems currently used for core follow analysis (i.e., MPACT, CTF, Multigroup Cross Section Generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR core follow (e.g., TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR core follow and the ongoing PWR core follow V&V activities for FY17. An automated procedure and output data format are proposed for standardizing the output of core follow calculations and automatically generating tables and figures for the VERA-CS LaTeX file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results, to be used within the script to automatically flag any results that require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the automation scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes that the Watts Bar cycle depletion cases be performed with the new cross section library and be included in the first draft of the new VERA-CS manual for release at the end of PoR15.
Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17 AMA plant core follow cases should also be included in the VERA-CS manual at the end of PoR15. After completion of the ongoing development of TIAMAT for fully coupled, full-core calculations with VERA-CS / BISON 1.5D, and after the completion of the refactoring of MAMBA3D for CIPS analysis in FY17, selected cases from the VERA-CS validation base should be performed, beginning with the legacy cases of Watts Bar and BEAVRS in PoR16. Finally, as potential Phase III future work, some additional considerations are identified for extending the VERA-CS V&V to other reactor types such as the BWR.
Shi, Lihua; Zhang, Zhe; Yu, Angela M; Wang, Wei; Wei, Zhi; Akhter, Ehtisham; Maurer, Kelly; Costa Reis, Patrícia; Song, Li; Petri, Michelle; Sullivan, Kathleen E
2014-01-01
Gene expression studies of peripheral blood mononuclear cells from patients with systemic lupus erythematosus (SLE) have demonstrated a type I interferon signature and increased expression of inflammatory cytokine genes. Studies of patients with Aicardi Goutières syndrome, commonly cited as a single gene model for SLE, have suggested that accumulation of non-coding RNAs may drive some of the pathologic gene expression, however, no RNA sequencing studies of SLE patients have been performed. This study was designed to define altered expression of coding and non-coding RNAs and to detect globally altered RNA processing in SLE. Purified monocytes from eight healthy age/gender matched controls and nine SLE patients (with low-moderate disease activity and lack of biologic drug use or immune suppressive treatment) were studied using RNA-seq. Quantitative RT-PCR was used to validate findings. Serum levels of endotoxin were measured by ELISA. We found that SLE patients had diminished expression of most endogenous retroviruses and small nucleolar RNAs, but exhibited increased expression of pri-miRNAs. Splicing patterns and polyadenylation were significantly altered. In addition, SLE monocytes expressed novel transcripts, an effect that was replicated by LPS treatment of control monocytes. We further identified increased circulating endotoxin in SLE patients. Monocytes from SLE patients exhibit globally dysregulated gene expression. The transcriptome is not simply altered by the transcriptional activation of a set of genes, but is qualitatively different in SLE. The identification of novel loci, inducible by LPS, suggests that chronic microbial translocation could contribute to the immunologic dysregulation in SLE, a new potential disease mechanism.
BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.
Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang
2017-10-27
This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
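The piecewise-linear bias model described above can be sketched as a lookup-and-interpolate correction applied to raw code pseudoranges. This is a hedged illustration, not the paper's estimated model: the elevation nodes, bias values, and pseudorange below are invented, and a real implementation would carry separate node sets per satellite group and frequency plus the near-field grid term.

```python
import numpy as np

# Invented piecewise-linear model of an elevation-dependent code bias for
# one satellite group and one frequency.  Node values are NOT estimates
# from Fengyun-3C data; they are placeholders for illustration.
elev_nodes = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])        # degrees
bias_nodes = np.array([-0.8, -0.6, -0.45, -0.3, -0.2, -0.1,
                        0.0, 0.1, 0.2, 0.3])                          # metres

def code_bias(elevation_deg):
    """Evaluate the piece-wise linear bias model by interpolating
    between the estimated node values."""
    return np.interp(elevation_deg, elev_nodes, bias_nodes)

raw_pseudorange = 21_345_678.123        # metres, invented observation
corrected = raw_pseudorange - code_bias(35.0)   # subtract modeled bias
```

In an orbit-determination run, the corrected pseudoranges replace the raw ones, which is how the bias model's effect on the 3D RMS position error would be assessed.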
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slowdowns, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
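The granularity effect noted above (speed-ups for large grids, slowdowns for small ones) can be illustrated with a toy scaling model for a 2-D domain-decomposed grid: per-GPU compute time scales with subdomain area, while halo exchange scales with its perimeter plus a fixed message latency. The cost constants are invented, not measured SMAUG+ figures.

```python
import math

def step_time(n, p, t_cell=1e-8, t_halo=4e-9, t_lat=1e-4):
    """Toy per-step wall time for an n x n grid split over p GPUs
    (p assumed a square number).  Constants are invented placeholders:
    t_cell = per-cell compute, t_halo = per-cell halo transfer,
    t_lat = fixed per-message latency."""
    side = n / math.sqrt(p)              # subdomain edge length in cells
    compute = t_cell * side * side       # scales with subdomain area
    comm = 4 * (t_halo * side + t_lat)   # 4 neighbour halo exchanges
    return compute + comm

for n in (1000, 8000):
    t1 = step_time(n, 1)
    for p in (4, 16, 64, 100):
        print(n, p, round(t1 / step_time(n, p), 2))   # parallel speed-up
```

With these constants the 8000 × 8000 grid scales nearly linearly to 100 GPUs, while the 1000 × 1000 grid saturates well below ideal speed-up because communication latency dominates the shrinking compute share, which is the qualitative behaviour the benchmarks report.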
Validation of suicide and self-harm records in the Clinical Practice Research Datalink
Thomas, Kyla H; Davies, Neil; Metcalfe, Chris; Windmeijer, Frank; Martin, Richard M; Gunnell, David
2013-01-01
Aims The UK Clinical Practice Research Datalink (CPRD) is increasingly being used to investigate suicide-related adverse drug reactions. No studies have comprehensively validated the recording of suicide and nonfatal self-harm in the CPRD. We validated general practitioners' recording of these outcomes using linked Office for National Statistics (ONS) mortality and Hospital Episode Statistics (HES) admission data. Methods We identified cases of suicide and self-harm recorded using appropriate Read codes in the CPRD between 1998 and 2010 in patients aged ≥15 years. Suicides were defined as patients with Read codes for suicide recorded within 95 days of their death. International Classification of Diseases codes were used to identify suicides/hospital admissions for self-harm in the linked ONS and HES data sets. We compared CPRD-derived cases/incidence of suicide and self-harm with those identified from linked ONS mortality and HES data, national suicide incidence rates and published self-harm incidence data. Results Only 26.1% (n = 590) of the ‘true’ (ONS-confirmed) suicides were identified using Read codes. Furthermore, only 55.5% of Read code-identified suicides were confirmed as suicide by the ONS data. Of the HES-identified cases of self-harm, 68.4% were identified in the CPRD using Read codes. The CPRD self-harm rates based on Read codes had similar age and sex distributions to rates observed in self-harm hospital registers, although rates were underestimated in all age groups. Conclusions The CPRD recording of suicide using Read codes is unreliable, with significant inaccuracy (over- and under-reporting). Future CPRD suicide studies should use linked ONS mortality data. The under-reporting of self-harm appears to be less marked. PMID:23216533
Evaluating a Dental Diagnostic Terminology in an Electronic Health Record
White, Joel M.; Kalenderian, Elsbeth; Stark, Paul C.; Ramoni, Rachel L.; Vaderhobli, Ram; Walji, Muhammad F.
2011-01-01
Standardized treatment procedure codes and terms are routinely used in dentistry. Utilization of a diagnostic terminology is common in medicine, but there is no satisfactory or commonly standardized dental diagnostic terminology available at this time. Recent advances in dental informatics have provided an opportunity for inclusion of diagnostic codes and terms as part of treatment planning and documentation in the patient treatment history. This article reports the results of the use of a diagnostic coding system in a large dental school’s predoctoral clinical practice. A list of diagnostic codes and terms, called Z codes, was developed by dental faculty members. The diagnostic codes and terms were implemented into an electronic health record (EHR) for use in a predoctoral dental clinic. The utilization of diagnostic terms was quantified. The validity of Z code entry was evaluated by comparing the diagnostic term entered to the procedure performed, where valid diagnosis-procedure associations were determined by consensus among three calibrated academically based dentists. A total of 115,004 dental procedures were entered into the EHR during the year sampled. Of those, 43,053 were excluded from this analysis because they represented diagnostic or other procedures unrelated to treatment. Among the 71,951 treatment procedures, 27,973 had diagnoses assigned to them, for an overall utilization of 38.9 percent. Of the 147 available Z codes, ninety-three were used (63.3 percent). There were 335 unique procedures provided and 2,127 procedure/diagnosis pairs captured in the EHR. Overall, 76.7 percent of the diagnoses entered were valid. We conclude that dental diagnostic terminology can be incorporated within an electronic health record and utilized in an academic clinical environment. Challenges remain in the development of terms and in implementation and ease of use that, if resolved, would improve utilization. PMID:21546594
42 CFR 488.8 - Federal review of accreditation organizations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... through validation surveys, the State survey agency monitors corrections as specified at § 488.7(b)(3... CMS with electronic data in ASCII comparable code and reports necessary for effective validation and...) Validation review. Following the end of a validation review period, CMS will identify any accreditation...
42 CFR 488.8 - Federal review of accreditation organizations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... through validation surveys, the State survey agency monitors corrections as specified at § 488.7(b)(3... CMS with electronic data in ASCII comparable code and reports necessary for effective validation and...) Validation review. Following the end of a validation review period, CMS will identify any accreditation...
42 CFR 488.8 - Federal review of accreditation organizations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... through validation surveys, the State survey agency monitors corrections as specified at § 488.7(b)(3... CMS with electronic data in ASCII comparable code and reports necessary for effective validation and...) Validation review. Following the end of a validation review period, CMS will identify any accreditation...
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, while directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
Seligman, Sarah C; Giovannetti, Tania; Sestito, John; Libon, David J
2014-01-01
Mild functional difficulties have been associated with early cognitive decline in older adults and increased risk for conversion to dementia in mild cognitive impairment, but our understanding of this decline has been limited by a dearth of objective methods. This study evaluated the reliability and validity of a new system to code subtle errors on an established performance-based measure of everyday action and described preliminary findings within the context of a theoretical model of action disruption. Forty-five older adults completed the Naturalistic Action Test (NAT) and neuropsychological measures. NAT performance was coded for overt errors, and subtle action difficulties were scored using a novel coding system. An inter-rater reliability coefficient was calculated. Validity of the coding system was assessed using a repeated-measures ANOVA with NAT task (simple versus complex) and error type (overt versus subtle) as within-group factors. Correlation/regression analyses were conducted among overt NAT errors, subtle NAT errors, and neuropsychological variables. The coding of subtle action errors was reliable and valid, and episodic memory breakdown predicted subtle action disruption. Results suggest that the NAT can be useful in objectively assessing subtle functional decline. Treatments targeting episodic memory may be most effective in addressing early functional impairment in older age.
Expert system validation in prolog
NASA Technical Reports Server (NTRS)
Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline
1988-01-01
An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, deadends, unreachability, and cycles. The architecture chosen is extremely flexible and extensible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations that match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
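The connection-graph checks described in this abstract (deadends, cycles) can be sketched outside Prolog as well. The following Python fragment is illustrative only: the rule base, predicate names, and the simplified derivability test are invented, not taken from EVA.

```python
# Minimal sketch of two connection-graph checkers in the spirit of EVA.
# Rules are (premises, conclusion) pairs over propositional atoms;
# facts are known atoms. Rule base below is invented for illustration.

def build_graph(rules):
    """Arc r1 -> r2 when r1's conclusion matches one of r2's premises."""
    arcs = {name: set() for name in rules}
    for r1, (_, concl) in rules.items():
        for r2, (prems, _) in rules.items():
            if concl in prems:
                arcs[r1].add(r2)
    return arcs

def deadends(rules, facts):
    """Rules with a premise that no fact or rule conclusion can satisfy."""
    derivable = set(facts) | {concl for _, concl in rules.values()}
    return {r for r, (prems, _) in rules.items()
            if any(p not in derivable for p in prems)}

def has_cycle(arcs):
    """Depth-first search over the graph, flagging any back edge."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in arcs}
    def visit(n):
        color[n] = GREY
        for m in arcs[n]:
            if color[m] == GREY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in arcs)

rules = {
    "r1": (["a"], "b"),
    "r2": (["b"], "c"),
    "r3": (["c"], "b"),       # forms a cycle with r2
    "r4": (["missing"], "d"), # deadend: 'missing' is underivable
}
facts = ["a"]
arcs = build_graph(rules)
print(deadends(rules, facts))  # {'r4'}
print(has_cycle(arcs))         # True
```

EVA's actual checkers work over unifiable patterns rather than identical atoms, but the graph-traversal structure is the same.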
McCrea, Simon M
2007-06-18
Naming and localization of individual body part words to a high-resolution line drawing of a full human figure was tested in a mixed-sex sample of nine right handed subjects. Activation within the superior medial left parietal cortex and bilateral dorsolateral cortex was consistent with involvement of the body schema which is a dynamic postural self-representation coding and combining sensory afference and motor efference inputs/outputs that is automatic and nonconscious. Additional activation of the left rostral occipitotemporal cortex was consistent with involvement of the neural correlates of the verbalizable body structural description that encodes semantic and categorical representations to animate objects such as full human figures. The results point to a highly distributed cortical representation for the encoding and manipulation of body part information and highlight the need for the incorporation of more ecologically valid measures of body schema coding in future functional neuroimaging studies.
FY16 ASME High Temperature Code Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.
2016-09-01
One of the objectives of the ASME high temperature Code activities is to develop and validate both improvements and the basic features of Section III, Division 5, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to be used to assess whether or not a specific component under specified loading conditions will satisfy the elevated temperature design requirements for Class A components in HBB. There are many features and alternative paths of varying complexity in HBB. The initial focus of this task is a basic path through the various options for a single reference material, 316H stainless steel. However, the program will be structured for eventual incorporation of all the features and permitted materials of HBB. Since this task has recently been initiated, this report focuses on the description of the initial path forward and an overall description of the approach to computer program development.
Stalfors, J; Enoksson, F; Hermansson, A; Hultcrantz, M; Robinson, Å; Stenfeldt, K; Groth, A
2013-04-01
To investigate the internal validity of the diagnosis code used at discharge after treatment of acute mastoiditis. Retrospective national re-evaluation study of patient records from 1993-2007, with comparison to the original ICD codes. All ENT departments at university hospitals and one large county hospital department in Sweden. A total of 1966 records were reviewed for patients with ICD codes for in-patient treatment of acute (529), chronic (44) and unspecified mastoiditis (21) and acute otitis media (1372). ICD codes were reviewed by the authors with a defined protocol for the clinical diagnosis of acute mastoiditis. Those not satisfying the diagnosis were given an alternative diagnosis. Of 529 records with ICD coding for acute mastoiditis, 397 (75%) were found to meet the definition of acute mastoiditis used in this study, while 18% were not diagnosed as having any type of mastoiditis after review. Review of the in-patients treated for acute otitis media identified an additional 60 cases fulfilling the definition of acute mastoiditis. Overdiagnosis was common, and many patients with a diagnostic code indicating acute mastoiditis had been treated for external otitis or otorrhoea with transmyringeal drainage. The internal validity of the diagnosis acute mastoiditis is dependent on the use of standardised, well-defined criteria. Reliability of diagnosis is fundamental for the comparison of results from different studies. Inadequate reliability in the diagnosis of acute mastoiditis also affects calculations of incidence rates and statistical power and may also affect the conclusions drawn from the results. © 2013 Blackwell Publishing Ltd.
An efficient code for the simulation of nonhydrostatic stratified flow over obstacles
NASA Technical Reports Server (NTRS)
Pihos, G. G.; Wurtele, M. G.
1981-01-01
The physical model and computational procedure of the code are described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will receive as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.
Niedhammer, Isabelle; Milner, Allison; LaMontagne, Anthony D; Chastang, Jean-François
2018-03-08
The objectives of the study were to construct a job-exposure matrix (JEM) for psychosocial work factors of the job strain model, to evaluate its validity, and to compare the results over time. The study was based on nationally representative data of the French working population, with samples of 46,962 employees (2010 SUMER survey) and 24,486 employees (2003 SUMER survey). Psychosocial work factors included the job strain model factors (Job Content Questionnaire): psychological demands, decision latitude, social support, job strain and iso-strain. Job title was defined by three variables: occupation and economic activity coded using standard classifications, and company size. A JEM was constructed using a segmentation method (Classification and Regression Tree, CART) and cross-validation. The best quality JEM was found using occupation and company size for social support. For decision latitude and psychological demands, there was not much difference between using occupation and company size with or without economic activity. The validity of the JEM estimates was higher for decision latitude, job strain and iso-strain, and lower for social support and psychological demands. Differential changes over time were observed for psychosocial work factors according to occupation, economic activity and company size. This study demonstrated that company size in addition to occupation may improve the validity of JEMs for psychosocial work factors. These matrices may be time-dependent and may need to be updated over time. More research is needed to assess the validity of JEMs given that these matrices may be able to provide exposure assessments to study a range of health outcomes.
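The core idea of a job-exposure matrix, assigning every worker in a job-title cell the cell-level exposure estimate, can be sketched as follows. This is a toy illustration using simple cell means; the study itself used CART-based segmentation, and the records, scores, and field names here are invented.

```python
from collections import defaultdict

# Toy job-exposure matrix (JEM): average individual exposure scores
# within cells defined by occupation x company size, then assign the
# cell mean back to every worker in that cell. Data are invented.

workers = [
    {"occ": "clerk",  "size": "small", "support": 3.0},
    {"occ": "clerk",  "size": "small", "support": 4.0},
    {"occ": "driver", "size": "large", "support": 2.0},
    {"occ": "driver", "size": "large", "support": 2.5},
]

def build_jem(workers, factor):
    """Map each (occupation, company size) cell to its mean score."""
    cells = defaultdict(list)
    for w in workers:
        cells[(w["occ"], w["size"])].append(w[factor])
    return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}

jem = build_jem(workers, "support")
print(jem[("clerk", "small")])   # 3.5
print(jem[("driver", "large")])  # 2.25
```

Validity is then assessed by how well these group-level assignments reproduce the individually reported exposures, which is what the abstract's comparison across factors measures.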
Anthranilate-Activating Modules from Fungal Nonribosomal Peptide Assembly Lines†
Ames, Brian D.; Walsh, Christopher T.
2010-01-01
Fungal natural products containing benzodiazepinone- and quinazolinone-fused ring systems can be assembled by nonribosomal peptide synthetases (NRPS) using the conformationally restricted β-amino acid anthranilate as one of the key building blocks. We validated that the first module of the acetylaszonalenin synthetase of Neosartorya fischeri NRRL 181 activates anthranilate to anthranilyl-AMP. With this as starting point, we then used bioinformatic predictions about fungal adenylation domain selectivities to identify and confirm an anthranilate-activating module in the fumiquinazoline A producer Aspergillus fumigatus Af293 as well as a second anthranilate-activating NRPS in N. fischeri. This establishes an anthranilate adenylation domain code for fungal NRPS and should facilitate detection and cloning of gene clusters for benzodiazepine- and quinazoline-containing polycyclic alkaloids with a wide range of biological activities. PMID:20225828
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, A.; Canepa, S.; Zerkak, O.
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represent the evolution of specific analysis aspects, including e.g. code version, transient specific simulation methodology and model "nodalisation". If properly set up, such environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6.
The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs are investigated. For the steady-state results, these include fuel temperatures distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
Tonarelli, Silvina B; Tibbs, Michael; Vazquez, Gabriela; Lakshminarayan, Kamakshi; Rodriguez, Gustavo J; Qureshi, Adnan I
2012-02-01
A new International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code, V45.88, was approved by the Centers for Medicare and Medicaid Services (CMS) on October 1, 2008. This code identifies patients in whom intravenous (IV) recombinant tissue plasminogen activator (rt-PA) is initiated in one hospital's emergency department, followed by transfer within 24 hours to a comprehensive stroke center, a paradigm commonly referred to as "drip-and-ship." This study assessed the use and accuracy of the new V45.88 code for identifying ischemic stroke patients who meet the criteria for drip-and-ship at 2 advanced certified primary stroke centers. Consecutive patients over a 12-month period were identified by primary ICD-9-CM diagnosis codes related to ischemic stroke. The accuracy of V45.88 code utilization using administrative data provided by Health Information Management Services was assessed through a comparison with data collected in prospective stroke registries maintained at each hospital by a trained abstractor. Of a total of 428 patients discharged from both hospitals with a diagnosis of ischemic stroke, 37 patients were given ICD-9-CM code V45.88. The internally validated data from the prospective stroke database demonstrated that a total of 40 patients met the criteria for drip-and-ship. A concurrent comparison found that 92% (sensitivity) of the patients treated with drip-and-ship were coded with V45.88. None of the non-drip-and-ship stroke cases received the V45.88 code (100% specificity). The new ICD-9-CM code for drip-and-ship appears to have high specificity and sensitivity, allowing effective data collection by the CMS. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.
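The reported 92% sensitivity and 100% specificity follow directly from the counts given in this abstract; a minimal sketch of the arithmetic (assuming, as the abstract states, no V45.88 codes among non-drip-and-ship strokes):

```python
# Recomputing the accuracy figures from the abstract's counts:
# 428 ischemic strokes, 40 registry-confirmed drip-and-ship cases,
# 37 of those coded V45.88, and no false-positive V45.88 codes.

total_strokes = 428
true_cases = 40        # registry-confirmed drip-and-ship
true_positives = 37    # of those, coded V45.88
false_positives = 0    # V45.88 on non-drip-and-ship strokes

true_negatives = total_strokes - true_cases - false_positives
sensitivity_pct = round(100 * true_positives / true_cases)
specificity_pct = round(100 * true_negatives
                        / (true_negatives + false_positives))

print(sensitivity_pct, specificity_pct)  # 92 100
```

Note that 37/40 is exactly 92.5%, which the abstract reports rounded down to 92%.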
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K
2006-04-05
Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins model were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and the isoconversional kinetic model gives very similar results to the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.
Neutronic calculation of fast reactors by the EUCLID/V1 integrated code
NASA Astrophysics Data System (ADS)
Koltashev, D. A.; Stakhanova, A. A.
2017-01-01
This article considers the neutronic calculation of a fast-neutron lead-cooled reactor, BREST-OD-300, by the EUCLID/V1 integrated code. The main goal of development and application of integrated codes is nuclear power plant safety justification. EUCLID/V1 is an integrated code designed for coupled neutronic, thermomechanical and thermohydraulic fast reactor calculations under normal and abnormal operating conditions. The EUCLID/V1 code is being developed in the Nuclear Safety Institute of the Russian Academy of Sciences. The integrated code has a modular structure and consists of three main modules: the thermohydraulic module HYDRA-IBRAE/LM/V1, the thermomechanical module BERKUT and the neutronic module DN3D. In addition, the integrated code includes databases with fuel, coolant and structural material properties. The neutronic module DN3D provides full-scale simulation of neutronic processes in fast reactors. Heat source distributions, control rod movement, reactivity level changes and other processes can be simulated. The neutron transport equation is solved in the multigroup diffusion approximation. This paper contains calculations implemented as part of EUCLID/V1 code validation: transient simulations of BREST-OD-300 (fuel assembly floating, decompression of a passive feedback system channel) and cross-validation against MCU-FR code results. The calculations demonstrate the application of the EUCLID/V1 code to BREST-OD-300 simulation and safety justification.
Chung, Cecilia P; Rohan, Patricia; Krishnaswami, Shanthi; McPheeters, Melissa L
2013-12-30
To review the evidence supporting the validity of billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify patients with rheumatoid arthritis (RA) in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to RA, and reference lists of included studies were searched. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria and extracted the data. Data collected included participant and algorithm characteristics. Nine studies reported validation of computer algorithms based on International Classification of Diseases (ICD) codes with or without free text, medication use, laboratory data and the need for a diagnosis by a rheumatologist. These studies yielded positive predictive values (PPV) ranging from 34 to 97% for identifying patients with RA. Higher PPVs were obtained with the use of at least two ICD and/or procedure codes (ICD-9 code 714 and others), the requirement of a prescription of a medication used to treat RA, or the requirement of participation of a rheumatologist in patient care. For example, the PPV increased from 66 to 97% when the use of disease-modifying antirheumatic drugs and the presence of a positive rheumatoid factor were required. There have been substantial efforts to propose and validate algorithms to identify patients with RA in automated databases. Algorithms that include more than one code and incorporate medications or laboratory data and/or require a diagnosis by a rheumatologist may increase the PPV. Copyright © 2013 Elsevier Ltd. All rights reserved.
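The trade-off this review describes, stricter case-finding algorithms yielding higher PPV, can be illustrated with a toy sketch. The patient records, counts, and thresholds below are invented for illustration and are not data from any of the reviewed studies.

```python
# Illustrative sketch (invented records): tightening a claims-based
# RA algorithm from "any ICD-9 714 code" to "at least two codes plus
# a DMARD prescription" trades case counts for positive predictive
# value (PPV = true positives / all flagged).

patients = [
    {"icd714_count": 1, "dmard": False, "true_ra": False},
    {"icd714_count": 2, "dmard": True,  "true_ra": True},
    {"icd714_count": 3, "dmard": True,  "true_ra": True},
    {"icd714_count": 1, "dmard": False, "true_ra": False},
    {"icd714_count": 2, "dmard": False, "true_ra": False},
]

def ppv(flagged):
    """Fraction of algorithm-flagged patients who truly have RA."""
    return sum(p["true_ra"] for p in flagged) / len(flagged)

loose = [p for p in patients if p["icd714_count"] >= 1]
strict = [p for p in patients if p["icd714_count"] >= 2 and p["dmard"]]

print(ppv(loose))   # 0.4
print(ppv(strict))  # 1.0
```

In real validation studies the "true_ra" labels come from chart review or rheumatologist diagnosis, which is why the reviewed PPVs top out below 100%.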
Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel
NASA Astrophysics Data System (ADS)
Edelmann, Paul G.
There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal , was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.
Peng, Mingkai; Chen, Guanmin; Kaplan, Gilaad G.; Lix, Lisa M.; Drummond, Neil; Lucyk, Kelsey; Garies, Stephanie; Lowerison, Mark; Weibe, Samuel; Quan, Hude
2016-01-01
Background Electronic medical records (EMR) can be a cost-effective source for hypertension surveillance. However, diagnosis of hypertension in EMR is commonly under-coded, which warrants the need to review blood pressure and antihypertensive drugs for hypertension case identification. Methods We included all the patients actively registered in The Health Improvement Network (THIN) database, UK, on 31 December 2011. Three case definitions, using diagnosis code, antihypertensive drug prescriptions and abnormal blood pressure, respectively, were used to identify hypertension patients. We compared the prevalence and treatment rate of hypertension in THIN with results from the Health Survey for England (HSE) in 2011. Results Compared with the prevalence reported by HSE (29.7%), the use of diagnosis code alone (14.0%) underestimated hypertension prevalence. The use of any of the definitions (38.4%) or the combination of antihypertensive drug prescriptions and abnormal blood pressure (38.4%) yielded higher prevalence than HSE. The use of diagnosis code or two abnormal blood pressure records within a 2-year period (31.1%) gave prevalence and treatment rates of hypertension similar to HSE. Conclusions Different definitions should be used for different study purposes. The definition 'diagnosis code or two abnormal blood pressure records within a 2-year period' could be used for hypertension surveillance in THIN. PMID:26547088
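The three case definitions compared in this abstract can be sketched on toy records. The field names and data below are invented for illustration; THIN's actual schema and the study's clinical thresholds differ.

```python
# Sketch of three EMR hypertension case definitions: diagnosis code
# alone, any-of (code OR drug OR abnormal BP), and "code OR two
# abnormal BP readings within 2 years". Records are invented.

patients = [
    {"dx_code": True,  "bp_high_years": [],           "on_drug": False},
    {"dx_code": False, "bp_high_years": [2009, 2010], "on_drug": True},
    {"dx_code": False, "bp_high_years": [2005, 2010], "on_drug": False},
]

def two_abnormal_bp_within(p, years=2):
    """True if any two abnormal readings fall within `years` of each other."""
    dates = sorted(p["bp_high_years"])
    return any(b - a <= years for a, b in zip(dates, dates[1:]))

def code_only(p):
    return p["dx_code"]

def any_definition(p):
    return p["dx_code"] or p["on_drug"] or bool(p["bp_high_years"])

def code_or_two_bp(p):
    return p["dx_code"] or two_abnormal_bp_within(p)

print(sum(map(code_only, patients)))       # 1
print(sum(map(any_definition, patients)))  # 3
print(sum(map(code_or_two_bp, patients)))  # 2
```

The third patient shows why the 2-year window matters: two isolated abnormal readings years apart count under the any-of definition but not under the windowed one, which is the behavior that brought the THIN prevalence in line with HSE.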
CERT TST November 2016 Visit Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, Robert Currier; Bailey, Teresa S.; Kahler, III, Albert Comstock
2017-04-27
The dozen plus presentations covered the span of the Center’s activities, including experimental progress, simulations of the experiments (both for calibration and validation), UQ analysis, nuclear data impacts, status of simulation codes, methods development, computational science progress, and plans for upcoming priorities. All three institutions comprising the Center (Texas A&M, University of Colorado Boulder, and Simon Fraser University) were represented. Center-supported students not only gave two of the oral presentations, but also highlighted their research in a number of excellent posters.
2011-01-01
all panels of a test were recorded, it was reduced into text format and then input into the code. B. Current Development The capabilities...due to fragmentation. Any or all of these models can be activated for a particular lethality assessment. Incapacitation criteria of different times...defined for all fragments represented in the file. Only the fragment material density needs to be set by the user. 3DPIMMS accounts for some statistical
Irma 5.2 multi-sensor signature prediction model
NASA Astrophysics Data System (ADS)
Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles
2007-04-01
The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN) to facilitate the research and development of multi-sensor systems. There are over 130 users within the Department of Defense, NASA, Department of Transportation, academia, and industry. Irma began as a high-resolution, physics-based Infrared (IR) target and background signature model for tactical weapon applications and has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame imagery (1992), and passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model (1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was released after an extensive verification and validation of an upgraded and reengineered active channel. Since 2005, the reengineering effort has focused on the Irma passive channel. Field measurements for the validation effort include the unpolarized data collection. Irma 5.2 is scheduled for release in the summer of 2007. This paper will report the validation test results of the Irma passive models and discuss the new features in Irma 5.2.
The Deceptive Mean: Conceptual Scoring of Cloze Entries Differentially Advantages More Able Readers
ERIC Educational Resources Information Center
O'Toole, J. M.; King, R. A. R.
2011-01-01
The "cloze" test is one possible investigative instrument for predicting text comprehensibility. Conceptual coding of student replacement of deleted words has been considered to be more valid than exact coding, partly because conceptual coding seemed fairer to poorer readers. This paper reports a quantitative study of 447 Australian…
Tracking Holland Interest Codes: The Case of South African Field Guides
ERIC Educational Resources Information Center
Watson, Mark B.; Foxcroft, Cheryl D.; Allen, Lynda J.
2007-01-01
Holland believes that specific personality types seek out matching occupational environments and his theory codes personality and environment according to a six letter interest typology. Since 1985 there have been numerous American studies that have queried the validity of Holland's coding system. Research in South Africa is scarcer, despite…
NASA Astrophysics Data System (ADS)
Class, G.; Meyder, R.; Stratmanns, E.
1985-12-01
The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investment, such as rewriting in modern languages, new data constructs, etc., and would necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts on the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the second for the demonstration cases and target HPC systems employed.
The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS
Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...
2015-04-22
The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
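The drift-flux closure at the heart of a three-equation formulation like the one described above can be sketched with the classic Zuber-Findlay relation. This is a hedged illustration of the general technique, not the actual PATHS implementation; the distribution parameter, drift velocity, and fluid densities below are illustrative values.

```python
def void_fraction(x, G, rho_l, rho_g, C0=1.13, v_gj=0.2):
    """Void fraction from flow quality via the Zuber-Findlay drift-flux model.

    x      : flow quality (vapor mass fraction)
    G      : mass flux [kg/m^2/s]
    rho_l  : liquid density [kg/m^3]
    rho_g  : vapor density [kg/m^3]
    C0     : distribution parameter (dimensionless, assumed value)
    v_gj   : drift velocity [m/s] (assumed value)
    """
    j_g = x * G / rho_g              # superficial vapor velocity
    j = j_g + (1.0 - x) * G / rho_l  # total superficial velocity
    return j_g / (C0 * j + v_gj)

# BWR-like conditions at roughly 7 MPa (illustrative densities)
alpha = void_fraction(x=0.10, G=1500.0, rho_l=740.0, rho_g=36.5)
```

The algebraic slip relation is what lets a three-equation solver avoid carrying separate phasic momentum equations, which is one source of the fast run times noted in the abstract.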
NASA Astrophysics Data System (ADS)
Kotchenova, Svetlana Y.; Vermote, Eric F.; Matarrese, Raffaella; Klemm, Frank J., Jr.
2006-09-01
A vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), which enables accounting for radiation polarization, has been developed and validated against a Monte Carlo code, Coulson's tabulated values, and MOBY (Marine Optical Buoy System) water-leaving reflectance measurements. The developed code was also tested against the scalar codes SHARM, DISORT, and MODTRAN to evaluate its performance in scalar mode and the influence of polarization. The obtained results have shown a good agreement of 0.7% in comparison with the Monte Carlo code, 0.2% for Coulson's tabulated values, and 0.001-0.002 for the 400-550 nm region for the MOBY reflectances. Ignoring the effects of polarization led to large errors in calculated top-of-atmosphere reflectances: more than 10% for a molecular atmosphere and up to 5% for an aerosol atmosphere. This new version of 6S is intended to replace the previous scalar version used for calculation of lookup tables in the MODIS (Moderate Resolution Imaging Spectroradiometer) atmospheric correction algorithm.
Computation of Thermally Perfect Compressible Flow Properties
NASA Technical Reports Server (NTRS)
Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake
1996-01-01
A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c(sub p) (specific heat at constant pressure) expressed as a polynomial function of temperature and developed into a computer program, referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. The accuracy of the TPG code in both the calorically perfect and the thermally perfect, calorically imperfect temperature regimes is verified by comparisons with the methods of NACA Report 1135. The advantages of the TPG code over the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, its ease of use, and its tabulated results.
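The core idea behind the thermally perfect treatment above can be sketched briefly: represent c_p as a polynomial in temperature and derive the temperature-dependent ratio of specific heats from it. The polynomial coefficients below are an invented air-like fit for illustration, not the actual TPG curve fits.

```python
R_AIR = 287.05  # specific gas constant for air [J/(kg K)]

def cp(T, coeffs=(1002.5, 0.0, 2.0e-4)):
    """c_p(T) as a polynomial sum(c_i * T**i) [J/(kg K)] (illustrative fit)."""
    return sum(c * T**i for i, c in enumerate(coeffs))

def gamma(T, R=R_AIR):
    """Ratio of specific heats for a thermally perfect gas: cp / (cp - R)."""
    c = cp(T)
    return c / (c - R)

def speed_of_sound(T, R=R_AIR):
    """a = sqrt(gamma(T) * R * T), with gamma now a function of temperature."""
    return (gamma(T) * R * T) ** 0.5

# gamma falls as vibrational energy modes activate at high temperature,
# which is exactly the calorically imperfect effect the TPG code captures
g_cold, g_hot = gamma(300.0), gamma(2000.0)
```

A calorically perfect table would hold gamma fixed (1.4 for air); the thermally perfect relations let every tabulated flow property inherit the gamma(T) variation.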
Validation of the SINDA/FLUINT code using several analytical solutions
NASA Technical Reports Server (NTRS)
Keller, John R.
1995-01-01
The Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) code has often been used to determine the transient and steady-state response of various thermal and fluid flow networks. While this code is an often used design and analysis tool, the validation of this program has been limited to a few simple studies. For the current study, the SINDA/FLUINT code was compared to four different analytical solutions. The thermal analyzer portion of the code (conduction and radiative heat transfer, SINDA portion) was first compared to two separate solutions. The first comparison examined a semi-infinite slab with a periodic surface temperature boundary condition. Next, a small, uniform temperature object (lumped capacitance) was allowed to radiate to a fixed temperature sink. The fluid portion of the code (FLUINT) was also compared to two different analytical solutions. The first study examined a tank filling process by an ideal gas in which there is both control volume work and heat transfer. The final comparison considered the flow in a pipe joining two infinite reservoirs of pressure. The results of all these studies showed that for the situations examined here, the SINDA/FLUINT code was able to match the results of the analytical solutions.
Kotchenova, Svetlana Y; Vermote, Eric F
2007-07-10
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
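The rate bookkeeping for the concatenated scheme above is simple arithmetic and can be made explicit. The dB coding-gain figures in the abstract come from simulation, not from this calculation; this sketch only shows the overall information rates being traded off.

```python
def concatenated_rate(rs_n, rs_k, conv_rate):
    """Overall information rate of a Reed-Solomon outer code concatenated
    with a convolutional inner code: (k/n) * inner rate."""
    return (rs_k / rs_n) * conv_rate

# (255, 223) Reed-Solomon outer code over the two inner codes compared
rate_7_half = concatenated_rate(255, 223, 1 / 2)    # (7, 1/2) inner code
rate_10_third = concatenated_rate(255, 223, 1 / 3)  # (10, 1/3) inner code
```

The (10, 1/3) configuration pays for its roughly 0.8 dB gain with a lower overall rate, i.e., more channel symbols per information bit.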
Quicklook overview of model changes in Melcor 2.2: Rev 6342 to Rev 9496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.
2017-05-01
MELCOR 2.2 is a significant official release of the MELCOR code with many new models and model improvements. This report provides the code user with a quick review and characterization of new models added, changes to existing models, the effect of code changes during this code development cycle (rev 6342 to rev 9496), and a preview of validation results with this code version. More detailed information is found in the code Subversion logs as well as the User Guide and Reference Manuals.
1994-08-01
Volume II. The report is accompanied by a set of diskettes containing the appropriate data for all the test cases; these diskettes are available ... GERMANY. PURPOSE OF THE TEST: The tests are part of a larger effort to establish a database of experimental measurements for missile configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sublet, J.-Ch.; Koning, A.J.; Forrest, R.A.
The reasons for the conversion of the European Activation File (EAF) into ENDF-6 format are threefold. First, it significantly enhances the JEFF-3.0 release by the addition of an activation file. Second, it considerably increases its usage through a recognized, official file format, allowing existing plug-in processes to be effective. Third, it moves towards a universal nuclear data file in contrast to the current separate general- and special-purpose files. The format chosen for the JEFF-3.0/A file uses reaction cross sections (MF-3), cross sections (MF-10), and multiplicities (MF-9). Having the data in ENDF-6 format allows the ENDF suite of utilities and checker codes to be used alongside many other utility, visualization, and processing codes. It is based on the EAF activation file used for many applications from fission to fusion, including dosimetry, inventories, depletion-transmutation, and geophysics. JEFF-3.0/A takes advantage of four generations of EAF files. Extensive benchmarking activities on these files provide feedback and validation with integral measurements. These, in parallel with a detailed graphical analysis based on EXFOR, have been applied, stimulating new measurements and significantly increasing the quality of this activation file. The next step is to include the EAF uncertainty data for all channels into JEFF-3.0/A.
NASA Astrophysics Data System (ADS)
Tanaka, Ken-ichi; Ueno, Jun
2017-09-01
Reliable information on the radioactivity inventory obtained from radiological characterization is important for decommissioning planning and is crucial for carrying out decommissioning effectively and safely. The information is referred to in planning the decommissioning strategy and in applications to the regulator, and reliable radioactivity inventory information can be used to optimize the decommissioning processes. In order to perform the radiological characterization reliably, we improved the procedure for evaluating neutron-activated materials in a Boiling Water Reactor (BWR). Neutron-activated materials are calculated with computer codes, and the results should be verified against measurements. The evaluation of neutron-activated materials can be divided into two processes: a calculation of the neutron-flux distribution and an activation calculation of the materials. The neutron-flux distribution is calculated with neutron transport codes and an appropriate cross-section library to simulate the neutron transport phenomena well. Using the neutron-flux distribution, we calculate the distribution of radioactivity concentration and also estimate the time-dependent distributions of radioactivity classification and radioactive-waste classification. The information obtained from the evaluation is used by other preparatory tasks to make the decommissioning plan and the decommissioning activity safe and rational.
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
The Fast Scattering Code (FSC): Validation Studies and Program Guidelines
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Dunn, Mark H.
2011-01-01
The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.
Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M
2018-01-01
Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; however, many databases lack reliable diagnosis code information. The aim was to develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, the algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using the diagnosis code as the reference standard. The chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing the number of diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Schweizer, Marin L.; Eber, Michael R.; Laxminarayan, Ramanan; Furuno, Jon P.; Popovich, Kyle J.; Hota, Bala; Rubin, Michael A.; Perencevich, Eli N.
2013-01-01
BACKGROUND AND OBJECTIVE Investigators and medical decision makers frequently rely on administrative databases to assess methicillin-resistant Staphylococcus aureus (MRSA) infection rates and outcomes. The validity of this approach remains unclear. We sought to assess the validity of the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code for infection with drug-resistant microorganisms (V09) for identifying culture-proven MRSA infection. DESIGN Retrospective cohort study. METHODS All adults admitted to 3 geographically distinct hospitals between January 1, 2001, and December 31, 2007, were assessed for presence of incident MRSA infection, defined as an MRSA-positive clinical culture obtained during the index hospitalization, and presence of the V09 ICD-9-CM code. The κ statistic was calculated to measure the agreement between presence of MRSA infection and assignment of the V09 code. Sensitivities, specificities, positive predictive values, and negative predictive values were calculated. RESULTS There were 466,819 patients discharged during the study period. Of the 4,506 discharged patients (1.0%) who had the V09 code assigned, 31% had an incident MRSA infection, 20% had a prior history of MRSA colonization or infection but did not have an incident MRSA infection, and 49% had no record of MRSA infection during the index hospitalization or the previous hospitalization. The V09 code identified MRSA infection with a sensitivity of 24% (range, 21%–34%) and positive predictive value of 31% (range, 22%–53%). The agreement between assignment of the V09 code and presence of MRSA infection had a κ coefficient of 0.26 (95% confidence interval, 0.25–0.27). CONCLUSIONS In its current state, the ICD-9-CM code V09 is not an accurate predictor of MRSA infection and should not be used to measure rates of MRSA infection. PMID:21460469
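The validation metrics used in the abstract above all derive from a single 2x2 table of code assignment versus culture-proven infection, and the relationships are worth making explicit. The counts below are invented for illustration; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 accuracy measures for a diagnosis-code 'test':
    tp = code assigned and infection present, fp = code assigned only,
    fn = infection only, tn = neither."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true infections flagged
        "specificity": tn / (tn + fp),  # fraction of non-infections unflagged
        "ppv": tp / (tp + fp),          # fraction of flags that are real
        "npv": tn / (tn + fn),          # fraction of non-flags that are real
    }

# illustrative counts for a rare outcome in a large discharge cohort
m = diagnostic_metrics(tp=1400, fp=3100, fn=4400, tn=457000)
```

Note how a rare outcome keeps NPV near 1.0 regardless of the code's quality, which is why sensitivity and PPV are the informative numbers in studies like this one.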
Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code
NASA Technical Reports Server (NTRS)
Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.
2003-01-01
Numerical modeling of the Pulsed Inductive Thruster with the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation with the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and mass-injection scheme were investigated and shown to produce trivial changes in the overall performance. An idealized model for these energy levels and propellants deduces that the energy expended on the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.
A method for radiological characterization based on fluence conversion coefficients
NASA Astrophysics Data System (ADS)
Froeschl, Robert
2018-06-01
Radiological characterization of components in accelerator environments is often required to ensure adequate radiation protection during maintenance, transport, and handling, as well as for the selection of the proper disposal pathway. The relevant quantities are typically weighted sums of specific activities with radionuclide-specific weighting coefficients. Traditional Monte Carlo methods either tally radionuclide creation events directly or score the particle fluences in the regions of interest and weight them off-line with radionuclide production cross sections. The presented method bases the radiological characterization on a set of fluence conversion coefficients. For a given irradiation profile and cool-down time, radionuclide production cross sections, material composition, and radionuclide-specific weighting coefficients, a set of particle-type- and energy-dependent fluence conversion coefficients is computed. These fluence conversion coefficients can then be used in a Monte Carlo transport code to perform on-line weighting and directly obtain the desired radiological characterization, either by using built-in multiplier features, as in the PHITS code, or by writing a dedicated user routine, as for the FLUKA code. The presented method has been validated against the standard event-based methods directly available in Monte Carlo transport codes.
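The coefficient-collapsing step described above can be sketched in a few lines: the nuclide-wise production cross sections are pre-weighted into a single energy-dependent coefficient, so the on-line step reduces to one fluence folding. All cross sections and weights below are invented placeholders, not data from the paper.

```python
def fluence_conversion_coeffs(xs_by_nuclide, weights):
    """Collapse nuclide-wise production cross sections sigma_i(E) into one
    energy-dependent coefficient c(E) = sum_i w_i * sigma_i(E)."""
    n_bins = len(next(iter(xs_by_nuclide.values())))
    return [
        sum(weights[nuc] * xs[e] for nuc, xs in xs_by_nuclide.items())
        for e in range(n_bins)
    ]

def characterize(fluence, coeffs):
    """On-line weighting: fold the scored fluence spectrum with c(E)."""
    return sum(phi * c for phi, c in zip(fluence, coeffs))

# two nuclides, three energy bins (placeholder cross sections and weights)
xs = {"Co-60": [0.0, 0.5, 1.2], "Na-22": [0.1, 0.3, 0.4]}
w = {"Co-60": 2.0, "Na-22": 1.0}  # e.g., a clearance-limit-style weighting
c_of_E = fluence_conversion_coeffs(xs, w)
index = characterize([1e5, 5e4, 1e4], c_of_E)
```

Because the weighting is baked into c(E), a transport code's built-in fluence multiplier can evaluate the characterization quantity during the run, with no per-nuclide post-processing.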
Specifications and programs for computer software validation
NASA Technical Reports Server (NTRS)
Browne, J. C.; Kleir, R.; Davis, T.; Henneman, M.; Haller, A.; Lasseter, G. L.
1973-01-01
Three software products developed during the study are reported and include: (1) FORTRAN Automatic Code Evaluation System, (2) the Specification Language System, and (3) the Array Index Validation System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and, finally, how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
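A composite validation metric in the spirit of the methodology above can be sketched as a weighted average of per-observable discrepancies, each normalized by the combined simulation and experimental uncertainty. The specific normalization and weights here are assumptions for illustration, not the formula from the paper.

```python
def observable_discrepancy(sim, exp, sigma_sim, sigma_exp):
    """Discrepancy between simulation and experiment for one observable,
    normalized by the combined uncertainty (root sum of squares)."""
    return abs(sim - exp) / (sigma_sim**2 + sigma_exp**2) ** 0.5

def global_metric(discrepancies, weights):
    """Weighted average over observables; smaller means better agreement.
    Weights could encode, e.g., how directly each observable is measured."""
    return sum(w * d for w, d in zip(weights, discrepancies)) / sum(weights)

# two illustrative observables with invented values and uncertainties
d1 = observable_discrepancy(sim=1.05, exp=1.00, sigma_sim=0.05, sigma_exp=0.05)
d2 = observable_discrepancy(sim=2.30, exp=2.00, sigma_sim=0.10, sigma_exp=0.10)
chi = global_metric([d1, d2], weights=[1.0, 2.0])
```

Keeping the per-observable discrepancies alongside the global number preserves the diagnostic detail the global metric alone would hide.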
Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.
Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia
2016-01-01
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
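The opponent channel readout described above can be sketched with two broadly tuned, contralaterally preferring channels whose difference encodes azimuth. The sigmoid tuning curves and the slope parameter are idealizations, not fitted fMRI responses.

```python
import math

def channel_response(azimuth_deg, preferred_side, slope=0.05):
    """Pooled response of a hemifield-tuned channel (idealized sigmoid).
    preferred_side = +1 for right-preferring, -1 for left-preferring."""
    return 1.0 / (1.0 + math.exp(-slope * preferred_side * azimuth_deg))

def opponent_code(azimuth_deg, gain=1.0):
    """Opponent signal: right-channel minus left-channel activity.
    'gain' mimics overall sound level; since it scales both channels
    equally, the sign and ordering of the code are level invariant."""
    r = gain * channel_response(azimuth_deg, +1)
    l = gain * channel_response(azimuth_deg, -1)
    return r - l

# the opponent signal is monotonic in azimuth and zero at the midline
codes = [opponent_code(a) for a in (-60, 0, 60)]
```

This is the key property the study reports: single-channel responses saturate and shift with level, but the between-channel difference keeps decoding azimuth accurately.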
NASA Astrophysics Data System (ADS)
Tang, S.; Thome, K.; Pace, D.; Heidbrink, W. W.; Carter, T. A.; Crocker, N. A.; NSTX-U Collaboration; DIII-D Collaboration
2017-10-01
An experimental investigation of the stability of Doppler-shifted cyclotron resonant compressional Alfvén eigenmodes (CAEs) using the flexible DIII-D neutral beams has begun to validate a theoretical understanding and realize the CAEs' diagnostic potential. CAEs are excited by energetic ions from neutral beams [Heidbrink, NF 2006], with frequencies and toroidal mode numbers sensitive to the fast-ion phase space distribution, making them a potentially powerful passive diagnostic. The experiment also contributes to a predictive capability for spherical tokamak temperature profiles, where CAEs may play a role in energy transport [Crocker, NF 2013]. CAE activity was observed using the recently developed Ion Cyclotron Emission diagnostic: high-bandwidth edge magnetic sensors sampled at 200 MS/s. Preliminary results show that CAEs become unstable in BT ramp discharges below a critical threshold in the range 1.7 to 1.9 T, with the exact value increasing as density increases. The experiment will be used to validate simulations from relevant codes such as the hybrid MHD code [Belova, PRL 2015]. This work was supported by US DOE Grants DE-SC0011810 and DE-FC02-04ER54698.
Study on detection geometry and detector shielding for portable PGNAA system using PHITS
NASA Astrophysics Data System (ADS)
Ithnin, H.; Dahing, L. N. S.; Lip, N. M.; Rashid, I. Q. Abd; Mohamad, E. J.
2018-01-01
Prompt gamma-ray neutron activation analysis (PGNAA) measurements require efficient detectors for gamma-ray detection. Apart from experimental studies, the Monte Carlo (MC) method has become one of the most popular tools in detector studies. The absolute efficiency of a 2 × 2 inch cylindrical Sodium Iodide (NaI) detector has been modelled with the PHITS software and compared with previous studies in the literature. In the present work, the PHITS code is used to optimize a portable PGNAA system using the validated NaI detector. The detection geometry is optimized by moving the detector along the sample to find the highest intensity of the prompt gamma rays generated from the sample. Shielding materials for the validated NaI detector are also studied to find the best option for the PGNAA system setup. The results show that the optimum position for the detector is at the surface of the sample, around 15 cm from the source, and indicate that this process can be followed to determine the best setup of a PGNAA system for a different sample size and detector type. It can be concluded that the PHITS code is a strong tool not only for efficiency studies but also for optimization of a PGNAA system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Gonzalez, R.; Petruzzi, A.; D'Auria, F.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and particular design features (e.g., oblique control rods, positive void coefficient) required a developed and validated complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) model. Reactor shut-down is obtained by the oblique CRs and, during accidental conditions, by an emergency shut-down system (JDJ) injecting a highly concentrated boron solution (boron clouds) into the moderator tank; the boron cloud reconstruction is obtained using a CFD (CFX) code calculation. A complete LBLOCA calculation implies the application of the RELAP5-3D{sup C} system code. Within the framework of the third Agreement 'NA-SA - Univ. of Pisa', a new RELAP5-3D control system for the boron injection system was developed and implemented in the validated coupled RELAP5-3D/NESTLE model of the Atucha-2 NPP. The aim of this activity is to find the limiting case (maximum break area size) for the Peak Cladding Temperature for LOCAs under fixed boundary conditions. (authors)
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Hooper, Russell W.
2016-10-04
In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methods in Dakota will yield, in part, evidence for a predictive capability maturity model (PCMM) assessment. Table 1.1 summarizes some key predictive maturity related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.
Validation of WBMOD in the Southeast Asian region
NASA Astrophysics Data System (ADS)
Cervera, M. A.; Thomas, R. M.; Groves, K. M.; Ramli, A. G.; Effendy
2001-01-01
The scintillation modeling code WBMOD, developed at North West Research, provides a global description of scintillation occurrence. However, the model has had limited calibration globally, and thus its performance in localized regions such as Australia-Southeast Asia needs to be evaluated. The Defence Science and Technology Organisation, Australia, in conjunction with the Indonesian National Institute of Aeronautics and Space (LAPAN), the Defence Science and Technology Centre, Malaysia, the Air Force Research Laboratory, United States, and IPS Radio and Space Services of Australia, has commissioned a network of GPS receivers to measure scintillation from sites in the region. One of the objectives of this deployment is to carry out a validation of WBMOD in the region. This paper describes the network of GPS receivers used to record the scintillation data. The details of the procedure used to validate WBMOD are given, and results of the validation are presented for data collected during 1998 and 1999 from two sites, one situated in the southern anomaly region and the other situated near the geomagnetic equator. We found good overall agreement between WBMOD and the observations for low sunspot numbers at both sites, although some differences were noted, the major one being that the scintillation activity predicted by WBMOD tended to cut off too early in the night. At higher levels of sunspot activity, while WBMOD agreed with the observations in the southern anomaly region, we found that it significantly underestimated the level of scintillation activity at the geomagnetic equator.
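GPS scintillation receivers of the kind described above typically report the S4 amplitude scintillation index, the normalized standard deviation of received signal intensity. The sketch below uses synthetic intensity samples; a real receiver would first detrend a high-rate intensity record over an interval of tens of seconds.

```python
def s4_index(intensities):
    """S4 = sqrt((<I^2> - <I>^2) / <I>^2) over an intensity record."""
    n = len(intensities)
    mean_i = sum(intensities) / n
    mean_i2 = sum(i * i for i in intensities) / n
    return ((mean_i2 - mean_i**2) / mean_i**2) ** 0.5

quiet = s4_index([1.00, 1.02, 0.98, 1.01, 0.99])  # weak scintillation
active = s4_index([1.0, 0.2, 1.9, 0.4, 1.5])      # strong scintillation
```

Comparing the occurrence statistics of such indices against model predictions is the kind of validation exercise performed for WBMOD here.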
Rück, Christian; Larsson, K Johan; Lind, Kristina; Perez-Vigil, Ana; Isomura, Kayoko; Sariaslan, Amir; Lichtenstein, Paul; Mataix-Cols, David
2015-06-22
The usefulness of cases diagnosed in administrative registers for research purposes is dependent on diagnostic validity. This study aimed to investigate the validity and inter-rater reliability of recorded diagnoses of tic disorders and obsessive-compulsive disorder (OCD) in the Swedish National Patient Register (NPR). Chart review of randomly selected register cases and controls. 100 tic disorder cases and 100 OCD cases were randomly selected from the NPR based on codes from the International Classification of Diseases (ICD) 8th, 9th and 10th editions, together with 50 epilepsy and 50 depression control cases. The obtained psychiatric records were blindly assessed by 2 senior psychiatrists according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) and ICD-10. Positive predictive value (PPV; cases diagnosed correctly divided by the sum of true positives and false positives). Between 1969 and 2009, the NPR included 7286 tic disorder and 24,757 OCD cases. The vast majority (91.3% of tic cases and 80.1% of OCD cases) are coded with the most recent ICD version (ICD-10). For tic disorders, the PPV was high across all ICD versions (PPV=89% in ICD-8, 86% in ICD-9 and 97% in ICD-10). For OCD, only ICD-10 codes had high validity (PPV=91-96%). None of the epilepsy or depression control cases were wrongly diagnosed as having tic disorders or OCD, respectively. Inter-rater reliability was outstanding for both tic disorders (κ=1) and OCD (κ=0.98). The validity and reliability of ICD codes for tic disorders and OCD in the Swedish NPR is generally high. We propose simple algorithms to further increase the confidence in the validity of these codes for epidemiological research. Published by the BMJ Publishing Group Limited.
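The two statistics this record relies on, positive predictive value and Cohen's kappa, are simple to compute. The sketch below is illustrative only; the example counts are hypothetical and are not the study's raw chart-review data.

```python
def ppv(true_pos, false_pos):
    """Positive predictive value: correctly diagnosed cases divided by
    all register cases reviewed (true positives + false positives)."""
    return true_pos / (true_pos + false_pos)

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary judgements (lists of 0/1),
    i.e. observed agreement corrected for chance agreement."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n
    p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_obs - p_exp) / (1 - p_exp)

# A PPV of 97% corresponds to, e.g., 97 confirmed diagnoses
# out of 100 reviewed charts (illustrative numbers).
print(ppv(97, 3))  # → 0.97
```

Perfect rater agreement, as reported for tic disorders (κ=1), yields `cohens_kappa(a, a) == 1.0` for any non-degenerate rating vector.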
Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakarado, Gary L.
The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and support and facilitate the completion by 2012 of necessary codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Nealy, John E.; Wilson, John W.
2011-10-01
The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements in ISS form the ideal tool for the experimental validation of radiation environmental models, nuclear transport code algorithms and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both the environmental models and the nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermo-Luminescent Detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate six degree of freedom (DOF) description of ISS trajectory and orientation. It is imperative that we understand ISS exposures dynamically for crew career planning, and ensure that the regulatory requirements of keeping exposure as low as reasonably achievable (ALARA) are adequately implemented. This is especially true as ISS nears some form of completion with increasing complexity, resulting in a larger drag coefficient, and requiring operation at higher altitudes with increased exposure rates. In this paper, the ISS environmental model is configured for 11A (circa mid 2005), and uses non-isotropic and dynamic geomagnetic transmission and trapped proton models. ISS 11A and LEO model validations are important steps in preparation for the design and validation of the next generation manned vehicles.
While the described cutoff rigidity, trapped proton and electron formalisms as coded in a package named GEORAD (GEOmagnetic RADiation) and a web interface named OLTARIS (On-line Tool for the Assessment of Radiation in Space) are applicable to the LEO, Medium Earth Orbit (MEO) and Geosynchronous Earth Orbit (GEO) at quiet solar periods, in this report, the validation of the models using available measurements are limited to STS and ISS nominal operational altitudes (300-400 km) range at LEO where the dominant fields within the vehicle are the trapped proton and attenuated Galactic Cosmic Ray (GCR) ions. The described formalism applies to trapped electron at LEO, MEO and GEO as well. Due to the scarcity of available electron measurements, the trapped electron capabilities of the GEORAD are not discussed in this report, but are accessible through OLTARIS web interface. GEORAD and OLTARIS interests are in the study of long term effects (i.e. a meaningful portion of solar cycle). Therefore, GEORAD does not incorporate any short term external field contribution due to solar activity. Finally, we apply these environmental models to selected target points within ISS 6A (circa early 2001), 7A (circa late 2001), and 11A during its passage through the South Atlantic Anomaly (SAA) to assess the validity of the environmental models at ISS altitudes.
Towards seamless workflows in agile data science
NASA Astrophysics Data System (ADS)
Klump, J. F.; Robertson, J.
2017-12-01
Agile workflows are a response to projects with requirements that may change over time. They prioritise rapid and flexible responses to change, preferring to adapt to changes in requirements rather than predict them before a project starts. This suits the needs of research very well because research is inherently agile in its methodology. The adoption of agile methods has made collaborative data analysis much easier in a research environment fragmented across institutional data stores, HPC, personal and lab computers and more recently cloud environments. Agile workflows use tools that share a common worldview: in an agile environment, there may be more than one valid version of data, code or environment in play at any given time. All of these versions need references and identifiers. For example, a team of developers following the git-flow conventions (github.com/nvie/gitflow) may have several active branches, one for each strand of development. These workflows allow rapid and parallel iteration while maintaining identifiers pointing to individual snapshots of data and code and allowing rapid switching between strands. In contrast, the current focus of versioning in research data management is geared towards managing data for reproducibility and long-term preservation of the record of science. While both are important goals in the persistent curation domain of the institutional research data infrastructure, current tools emphasise planning over adaptation and can introduce unwanted rigidity by insisting on a single valid version or point of truth. In the collaborative curation domain of a research project, things are more fluid. However, there is no equivalent to the "versioning iso-surface" of the git protocol for the management and versioning of research data.
At CSIRO we are developing concepts and tools for the agile management of software code and research data for virtual research environments, based on our experiences of actual data analytics projects in the geosciences. We use code management that allows researchers to interact with the code through tools like Jupyter Notebooks while data are held in an object store. Our aim is an architecture allowing seamless integration of code development, data management, and data processing in virtual research environments.
Transport calculations and accelerator experiments needed for radiation risk assessment in space.
Sihver, Lembit
2008-01-01
The major uncertainties on space radiation risk estimates in humans are associated with the poor knowledge of the biological effects of low and high LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk for radiation induced failures in advanced microelectronics, such as single-event effects, etc., and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space and ground based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper different multipurpose particle and heavy ion transport codes will be presented, different concepts of shielding and protection discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.
Uncertainty Assessment of Hypersonic Aerothermodynamics Prediction Capability
NASA Technical Reports Server (NTRS)
Bose, Deepak; Brown, James L.; Prabhu, Dinesh K.; Gnoffo, Peter; Johnston, Christopher O.; Hollis, Brian
2011-01-01
The present paper provides the background of a focused effort to assess uncertainties in predictions of heat flux and pressure in hypersonic flight (airbreathing or atmospheric entry) using state-of-the-art aerothermodynamics codes. The assessment is performed for four mission relevant problems: (1) shock turbulent boundary layer interaction on a compression corner, (2) shock turbulent boundary layer interaction due to an impinging shock, (3) high-mass Mars entry and aerocapture, and (4) high speed return to Earth. A validation-based uncertainty assessment approach with reliance on subject matter expertise is used. A code verification exercise with code-to-code comparisons and comparisons against well-established correlations is also included in this effort. A thorough review of the literature in search of validation experiments is performed, which identified a scarcity of ground-based validation experiments at hypersonic conditions. In particular, a shortage of usable experimental data at flight-like enthalpies and Reynolds numbers is found. The uncertainty was quantified using metrics that measured the discrepancy between model predictions and experimental data. The discrepancy data is statistically analyzed and investigated for physics based trends in order to define a meaningful quantified uncertainty. The detailed uncertainty assessment of each mission relevant problem is found in the four companion papers.
Exploration of Uncertainty in Glacier Modelling
NASA Technical Reports Server (NTRS)
Thompson, David E.
1999-01-01
There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.
Pump CFD code validation tests
NASA Technical Reports Server (NTRS)
Brozowski, L. A.
1993-01-01
Pump CFD code validation tests were accomplished by obtaining nonintrusive flow characteristic data at key locations in generic current liquid rocket engine turbopump configurations. Data were obtained with a laser two-focus (L2F) velocimeter at scaled design flow. Three components were surveyed: a 1970's-designed impeller, a 1990's-designed impeller, and a four-bladed unshrouded inducer. Two-dimensional velocities were measured upstream and downstream of the two impellers. Three-dimensional velocities were measured upstream, downstream, and within the blade row of the unshrouded inducer.
NASA Astrophysics Data System (ADS)
Dartevelle, S.
2006-12-01
Large-scale volcanic eruptions are inherently hazardous events and therefore cannot be described by detailed and accurate in situ measurements; as a result, volcanic explosive phenomenology is inadequately constrained in terms of initial and inflow conditions, and little to no real-time data exist to Verify and Validate computer codes developed to model these geophysical events as a whole. However, code Verification and Validation remains a necessary step, particularly as volcanologists increasingly use numerical data for mitigation of volcanic hazards. The Verification and Validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, Verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, Validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The Verification step is rather simple to formally achieve, while, in the 'real world' explosive volcanism context, the second step, Validation, is nearly impossible. Hence, instead of validating computer codes against the whole large-scale unconstrained volcanic phenomenology, we rather suggest focusing on the key physics which control these volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomenologies separately.
Herewith, we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes, which have been recently redeveloped to meet the strict Quality Assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept of Energy. GMFIX solves Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle. Namely, the higher K, the further downstream the Mach disk, and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX displays all the properties found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which make this code ideal for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing the velocity profiles and turbulence quantities against well-constrained analog experiments. The computed velocity profiles agree with the analog measurements, as do the profiles of turbulence production. Overall, the Verification and Validation experiments, although inherently challenging, suggest GMFIX captures the most essential dynamical properties of multiphase and supersonic flows and jets.
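The pressure-ratio dependence described above can be sketched with a widely used Crist-type empirical correlation for the Mach disk standoff distance, x/D ≈ 0.67·√K. This is an illustrative assumption, not the correlation GMFIX itself uses, and it omits the particle-loading (Phi) effect the abstract also discusses.

```python
import math

def mach_disk_distance(K, nozzle_diameter=1.0):
    """Approximate standoff distance of the first Mach disk in an
    underexpanded jet, using the empirical scaling x/D ~ 0.67*sqrt(K),
    where K is the nozzle-to-ambient pressure ratio (assumed form)."""
    return 0.67 * math.sqrt(K) * nozzle_diameter

# Higher K pushes the Mach disk farther downstream, as the abstract notes.
print(mach_disk_distance(4))   # → 1.34 (in nozzle diameters)
assert mach_disk_distance(16) > mach_disk_distance(4)
```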
Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J
2015-01-01
Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
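The four validation measures in this record come directly from a 2x2 table of the ICD-coded definition against the chart-review reference standard. A minimal sketch follows; the counts are hypothetical values back-solved from the reported rates (604 of 1001 ICU patients with sepsis), not the study's raw data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 validation
    table (coded case definition vs. reference-standard chart review)."""
    return {
        "Sn":  tp / (tp + fn),   # sensitivity: coded positives among true cases
        "Sp":  tn / (tn + fp),   # specificity: coded negatives among non-cases
        "PPV": tp / (tp + fp),   # positive predictive value
        "NPV": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts consistent with the reported CIHI sepsis figures.
m = classification_metrics(tp=280, fp=5, fn=324, tn=392)
print({k: round(v, 3) for k, v in m.items()})
```

With these counts the rates reproduce the abstract's Sn 46.4%, Sp 98.7%, PPV 98.2% and NPV 54.7%, illustrating how a highly specific but insensitive code definition under-ascertains sepsis.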
Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D
2016-05-01
OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
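The two ICD-9-CM algorithms in this record are straightforward filters over discharge records. The sketch below assumes a simple record layout (ordered diagnosis list, procedure list); the field names and example admissions are hypothetical.

```python
def chiari_algorithm(admission, primary_only=False):
    """Algorithms from the abstract: a CM-I diagnosis code (348.4) plus a
    decompression procedure code (01.24 cranial or 03.09 spinal).
    Algorithm 2 (primary_only=True) further requires 348.4 to be the
    primary (first-listed) discharge diagnosis."""
    has_procedure = bool({"01.24", "03.09"} & set(admission["procedures"]))
    if primary_only:
        return admission["diagnoses"][0] == "348.4" and has_procedure
    return "348.4" in admission["diagnoses"] and has_procedure

# Hypothetical admissions for illustration.
case = {"diagnoses": ["348.4", "331.4"], "procedures": ["01.24"]}
secondary = {"diagnoses": ["331.4", "348.4"], "procedures": ["03.09"]}
print(chiari_algorithm(case))                          # → True (both algorithms)
print(chiari_algorithm(secondary, primary_only=True))  # → False (Algorithm 2 only)
```

The stricter filter trades a little sensitivity for higher PPV, matching the pattern the study reports (97% vs. 92% PPV).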
A Radiation Shielding Code for Spacecraft and Its Validation
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.
2000-01-01
The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.
Experimental aerothermodynamic research of hypersonic aircraft
NASA Technical Reports Server (NTRS)
Cleary, Joseph W.
1987-01-01
The 2-D and 3-D advanced computer codes being developed for use in the design of such hypersonic aircraft as the National Aero-Space Plane require comparison of the computational results with a broad spectrum of experimental data to fully assess the validity of the codes. This is particularly true for complex flow fields with control surfaces present and for flows with separation, such as leeside flow. Therefore, the objective is to provide a hypersonic experimental data base required for validation of advanced computational fluid dynamics (CFD) computer codes and for development of a more thorough understanding of the flow physics necessary for these codes. This is being done by implementing a comprehensive test program for a generic all-body hypersonic aircraft model in the NASA/Ames 3.5 foot Hypersonic Wind Tunnel over a broad range of test conditions to obtain pertinent surface and flowfield data. Results from the flow visualization portion of the investigation are presented.
Application of the DART Code for the Assessment of Advanced Fuel Behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J.; Totev, T.
2007-07-01
The Dispersion Analysis Research Tool (DART) code is a dispersion fuel analysis code that contains mechanistically-based fuel and reaction-product swelling models, a one dimensional heat transfer analysis, and mechanical deformation models. DART has been used to simulate the irradiation behavior of uranium oxide, uranium silicide, and uranium molybdenum aluminum dispersion fuels, as well as their monolithic counterparts. The thermal-mechanical DART code has been validated against RERTR tests performed in the ATR for irradiation data on interaction thickness, fuel, matrix, and reaction product volume fractions, and plate thickness changes. The DART fission gas behavior model has been validated against UO₂ fission gas release data as well as measured fission gas-bubble size distributions. Here DART is utilized to analyze various aspects of the observed bubble growth in U-Mo/Al interaction product. (authors)
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
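The Monte Carlo check described here can be sketched compactly: drop an elliptical hot spot at a random position and orientation and count how often a grid node lands inside it. This is an illustrative reconstruction of the idea, not the validation code used in the report, and it is only valid while the semi-major axis is smaller than the grid spacing (only the four nearest nodes are checked).

```python
import math, random

def detection_probability(grid_spacing, semi_major, semi_minor,
                          n_trials=20000, seed=1):
    """Monte Carlo estimate of the chance that a square sampling grid
    hits an elliptical hot spot of random position and orientation --
    the quantity the ELIPGRID algorithm computes analytically."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Random hot-spot centre within one grid cell (by symmetry) and angle.
        cx, cy = rng.uniform(0, grid_spacing), rng.uniform(0, grid_spacing)
        theta = rng.uniform(0, math.pi)
        detected = False
        # Test the four surrounding grid nodes against the ellipse equation.
        for gx in (0.0, grid_spacing):
            for gy in (0.0, grid_spacing):
                dx, dy = gx - cx, gy - cy
                u = dx * math.cos(theta) + dy * math.sin(theta)
                v = -dx * math.sin(theta) + dy * math.cos(theta)
                if (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0:
                    detected = True
        hits += detected
    return hits / n_trials

# A circular hot spot larger than the half-diagonal of a grid cell
# is always detected.
print(detection_probability(1.0, semi_major=0.8, semi_minor=0.8))  # → 1.0
```

A very thin ellipse (small semi-minor axis), the report's one problem case, yields a detection probability well below 1 and converges more slowly, which is consistent with it being the hardest geometry to validate.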
Validation of multi-temperature nozzle flow code NOZNT
NASA Technical Reports Server (NTRS)
Park, Chul; Lee, Seung-Ho
1993-01-01
A computer code NOZNT (Nozzle in n-Temperatures), which calculates one-dimensional flows of partially dissociated and ionized air in an expanding nozzle, is tested against five existing sets of experimental data. The code accounts for: a) the differences among various temperatures, i.e., translational-rotational temperature, vibrational temperatures of individual molecular species, and electron-electronic temperature, b) radiative cooling, and c) the effects of impurities. The experimental data considered are: 1) the sodium line reversal and 2) the electron temperature and density data, both obtained in a shock tunnel, and 3) the spectroscopic emission data, 4) electron beam data on vibrational temperature, and 5) mass-spectrometric species concentration data, all obtained in arc-jet wind tunnels. It is shown that the impurities are most likely responsible for the observed phenomena in shock tunnels. For the arc-jet flows, impurities are inconsequential and the NOZNT code is validated by numerically reproducing the experimental data.
Gabbett, Tim J
2013-08-01
The physical demands of rugby league, rugby union, and American football are significantly increased through the large number of collisions players are required to perform during match play. Because of the labor-intensive nature of coding collisions from video recordings, manufacturers of wearable microsensor (e.g., global positioning system [GPS]) units have refined the technology to automatically detect collisions, with several sport scientists attempting to use these microsensors to quantify the physical demands of collision sports. However, a question remains over the validity of these microtechnology units to quantify the contact demands of collision sports. Indeed, recent evidence has shown significant differences in the number of "impacts" recorded by microtechnology units (GPSports) and the actual number of collisions coded from video. However, a separate study investigated the validity of a different microtechnology unit (minimaxX; Catapult Sports) that included GPS and triaxial accelerometers, and also a gyroscope and magnetometer, to quantify collisions. Collisions detected by the minimaxX unit were compared with video-based coding of the actual events. No significant differences were detected in the number of mild, moderate, and heavy collisions detected via the minimaxX units and those coded from video recordings of the actual event. Furthermore, a strong correlation (r = 0.96, p < 0.01) was observed between collisions recorded via the minimaxX units and those coded from video recordings of the event. These findings demonstrate that only one commercially available and wearable microtechnology unit (minimaxX) can be considered capable of offering a valid method of quantifying the contact loads that typically occur in collision sports. Until such validation research is completed for other units, sport scientists should be circumspect about their ability to perform similar functions.
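The validity statistic cited here (r = 0.96) is a Pearson correlation between unit-detected and video-coded collision counts. A minimal sketch, with hypothetical per-player counts rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between collision counts recorded by a
    wearable unit and counts coded from video."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-player counts for illustration only.
unit_counts  = [12, 8, 15, 20, 5, 11]
video_counts = [11, 9, 14, 21, 6, 10]
print(round(pearson_r(unit_counts, video_counts), 2))  # → 0.98
```

A correlation this high indicates the unit ranks players' contact loads almost identically to video coding, which is the practical requirement for monitoring purposes.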
ERIC Educational Resources Information Center
Simkin, Mark G.
2008-01-01
Data-validation routines enable computer applications to test data to ensure their accuracy, completeness, and conformance to industry or proprietary standards. This paper presents five programming cases that require students to validate five different types of data: (1) simple user data entries, (2) UPC codes, (3) passwords, (4) ISBN numbers, and…
Chen, Chia-Yen; Lee, Phil H; Castro, Victor M; Minnier, Jessica; Charney, Alexander W; Stahl, Eli A; Ruderfer, Douglas M; Murphy, Shawn N; Gainer, Vivian; Cai, Tianxi; Jones, Ian; Pato, Carlos N; Pato, Michele T; Landén, Mikael; Sklar, Pamela; Perlis, Roy H; Smoller, Jordan W
2018-04-18
Bipolar disorder (BD) is a heritable mood disorder characterized by episodes of mania and depression. Although genomewide association studies (GWAS) have successfully identified genetic loci contributing to BD risk, sample size has become a rate-limiting obstacle to genetic discovery. Electronic health records (EHRs) represent a vast but relatively untapped resource for high-throughput phenotyping. As part of the International Cohort Collection for Bipolar Disorder (ICCBD), we previously validated automated EHR-based phenotyping algorithms for BD against in-person diagnostic interviews (Castro et al. Am J Psychiatry 172:363-372, 2015). Here, we establish the genetic validity of these phenotypes by determining their genetic correlation with traditionally ascertained samples. Case and control algorithms were derived from structured and narrative text in the Partners Healthcare system comprising more than 4.6 million patients over 20 years. Genomewide genotype data for 3330 BD cases and 3952 controls of European ancestry were used to estimate SNP-based heritability (h 2 g ) and genetic correlation (r g ) between EHR-based phenotype definitions and traditionally ascertained BD cases in GWAS by the ICCBD and Psychiatric Genomics Consortium (PGC) using LD score regression. We evaluated BD cases identified using 4 EHR-based algorithms: an NLP-based algorithm (95-NLP) and three rule-based algorithms using codified EHR with decreasing levels of stringency-"coded-strict", "coded-broad", and "coded-broad based on a single clinical encounter" (coded-broad-SV). The analytic sample comprised 862 95-NLP, 1968 coded-strict, 2581 coded-broad, 408 coded-broad-SV BD cases, and 3 952 controls. The estimated h 2 g were 0.24 (p = 0.015), 0.09 (p = 0.064), 0.13 (p = 0.003), 0.00 (p = 0.591) for 95-NLP, coded-strict, coded-broad and coded-broad-SV BD, respectively. The h 2 g for all EHR-based cases combined except coded-broad-SV (excluded due to 0 h 2 g ) was 0.12 (p = 0.004). 
These h2g were lower than or similar to the h2g observed by the ICCBD + PGCBD (0.23, p = 3.17 × 10^-80, total N = 33,181). However, the rg between ICCBD + PGCBD and the EHR-based cases were high for 95-NLP (0.66, p = 3.69 × 10^-5), coded-strict (1.00, p = 2.40 × 10^-4), and coded-broad (0.74, p = 8.11 × 10^-7). The rg between EHR-based BD definitions ranged from 0.90 to 0.98. These results provide the first genetic validation of automated EHR-based phenotyping for BD and suggest that this approach identifies cases that are highly genetically correlated with those ascertained through conventional methods. High-throughput phenotyping using the large data resources available in EHRs represents a viable method for accelerating psychiatric genetic research.
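The LD score regression used above relates per-SNP association statistics to LD scores. The noise-free sketch below illustrates only the core slope-to-heritability rescaling; the actual method adds regression weights, an intercept term for confounding, and block-jackknife standard errors. All variable names and the synthetic numbers are illustrative, not the study's data.

```python
# Simplified LD score regression: under a polygenic model,
# E[chi2_j] = 1 + (N * h2 / M) * l_j, where l_j is SNP j's LD score,
# N is sample size, and M is the number of SNPs. Regressing chi-square
# statistics on LD scores and rescaling the slope by M / N gives h2.

def ldsc_h2(chi2, ld_scores, n_samples, n_snps):
    """Estimate SNP-based heritability from GWAS chi-square statistics."""
    m = len(chi2)
    mean_l = sum(ld_scores) / m
    mean_c = sum(chi2) / m
    cov = sum((l - mean_l) * (c - mean_c) for l, c in zip(ld_scores, chi2))
    var = sum((l - mean_l) ** 2 for l in ld_scores)
    slope = cov / var                      # ordinary least squares slope
    return slope * n_snps / n_samples      # h2 = slope * M / N

# Noise-free synthetic check: chi2 generated from a known h2 = 0.12
N, M, h2_true = 7282, 1_000_000, 0.12
ld = [float(l) for l in range(10, 110)]
chi2 = [1.0 + (N * h2_true / M) * l for l in ld]
print(round(ldsc_h2(chi2, ld, N, M), 4))  # → 0.12
```

With real (noisy) statistics the estimate would carry sampling error, which is why the study reports p-values alongside each h2g.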
CBT Specific Process in Exposure-Based Treatments: Initial Examination in a Pediatric OCD Sample
Benito, Kristen Grabill; Conelea, Christine; Garcia, Abbe M.; Freeman, Jennifer B.
2012-01-01
Cognitive-Behavioral theory and empirical support suggest that optimal activation of fear is a critical component for successful exposure treatment. Using this theory, we developed coding methodology for measuring CBT-specific process during exposure. We piloted this methodology in a sample of young children (N = 18) who previously received CBT as part of a randomized controlled trial. Results supported the preliminary reliability and predictive validity of coding variables with 12 week and 3 month treatment outcome data, generally showing results consistent with CBT theory. However, given our limited and restricted sample, additional testing is warranted. Measurement of CBT-specific process using this methodology may have implications for understanding mechanism of change in exposure-based treatments and for improving dissemination efforts through identification of therapist behaviors associated with improved outcome. PMID:22523609
A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.
2006-01-01
The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements in ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both the environmental models and the nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermo Luminescent Detector (TLD) area monitors, demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate 6 degree of freedom (DOF) description of ISS trajectory and orientation.
Analysis of electrophoresis performance
NASA Technical Reports Server (NTRS)
Roberts, G. O.
1984-01-01
The SAMPLE computer code models electrophoresis separation in a wide range of conditions. Results are included for steady three-dimensional continuous flow electrophoresis (CFE), time-dependent gel and acetate film experiments in one or two dimensions, and isoelectric focusing in one dimension. The code evolves N two-dimensional radical concentration distributions in time, or in distance down a CFE chamber. For each time or distance increment, there are six stages, successively obtaining the pH distribution, the corresponding degrees of ionization for each radical, the conductivity, the electric field and current distribution, and the flux components in each direction for each separate radical. The final stage is to update the radical concentrations. The model formulation for ion motion in an electric field ignores activity effects and is valid only for low concentrations; for larger concentrations the conductivity is therefore also invalid.
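The six-stage increment described above can be sketched as a loop. The snippet below mirrors only the staging (pH → ionization → conductivity → field → flux → update) in one dimension for a single species; the arithmetic inside each stage is drastically simplified placeholder physics, and none of the names come from the SAMPLE code itself.

```python
# Toy 1-D sketch of the six-stage update loop. Placeholder physics only;
# the staging order is what mirrors the SAMPLE abstract.

def advance(conc, voltage=1.0, mobility=0.1, dt=0.01):
    n = len(conc)
    ph = [7.0 - 0.1 * c for c in conc]                          # stage 1: pH
    alpha = [1.0 / (1.0 + 10 ** (p - 7.0)) for p in ph]          # stage 2: ionization
    sigma = [max(a * c, 1e-9) for a, c in zip(alpha, conc)]      # stage 3: conductivity
    field = [voltage / (s * n) for s in sigma]                   # stage 4: E field / current
    flux = [mobility * a * c * e
            for a, c, e in zip(alpha, conc, field)]              # stage 5: flux components
    new = conc[:]                                                # stage 6: update
    for i in range(1, n):
        new[i] += dt * (flux[i - 1] - flux[i])                   # upwind flux difference
    return new

c = advance([1.0, 2.0, 3.0, 2.0, 1.0])
print(len(c))  # → 5
```

In the real code each stage operates on N coupled two-dimensional fields rather than one 1-D array.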
Hwang, Y Joseph; Shariff, Salimah Z; Gandhi, Sonja; Wald, Ron; Clark, Edward; Fleet, Jamie L; Garg, Amit X
2012-01-01
Objective To evaluate the validity of the International Classification of Diseases, Tenth Revision (ICD-10) code N17x for acute kidney injury (AKI) in elderly patients in two settings: at presentation to the emergency department and at hospital admission. Design A population-based retrospective validation study. Setting Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum creatinine measurements at presentation to the emergency department (n=36,049) or hospital admission (n=38,566). The baseline serum creatinine measurement was a median of 102 and 39 days prior to presentation to the emergency department and hospital admission, respectively. Main outcome measures Sensitivity, specificity and positive and negative predictive values of ICD-10 diagnostic coding algorithms for AKI using a reference standard based on changes in serum creatinine from the baseline value. Median changes in serum creatinine of patients who were code positive and code negative for AKI. Results The sensitivity of the best-performing coding algorithm for AKI (defined as a ≥2-fold increase in serum creatinine concentration) was 37.4% (95% CI 32.1% to 43.1%) at presentation to the emergency department and 61.6% (95% CI 57.5% to 65.5%) at hospital admission. The specificity was greater than 95% in both settings. In patients who were code positive for AKI, the median (IQR) increase in serum creatinine from the baseline was 133 (62 to 288) µmol/l at presentation to the emergency department and 98 (43 to 200) µmol/l at hospital admission. In those who were code negative, the increase in serum creatinine was 2 (−8 to 14) and 6 (−4 to 20) µmol/l, respectively. Conclusions The presence or absence of ICD-10 code N17x differentiates two groups of patients with distinct changes in serum creatinine at the time of a hospital encounter. However, the code underestimates the true incidence of AKI due to a limited sensitivity. PMID:23204077
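The four validation metrics reported in this and the following abstracts all derive from a 2x2 confusion matrix of code status against the laboratory reference standard. A minimal computation, with invented counts (not the study's data) chosen to echo the ~37% sensitivity / >95% specificity pattern:

```python
# Diagnostic-code validation metrics from a 2x2 confusion matrix.
# tp: code positive & truly AKI; fp: code positive & not AKI;
# fn: code negative & truly AKI; tn: code negative & not AKI.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true AKI flagged by the code
        "specificity": tn / (tn + fp),   # fraction of non-AKI correctly unflagged
        "ppv": tp / (tp + fp),           # fraction of code-positives truly AKI
        "npv": tn / (tn + fn),           # fraction of code-negatives truly non-AKI
    }

m = diagnostic_metrics(tp=120, fp=30, fn=200, tn=9650)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # → 0.375 0.997
```

The pattern above — high specificity with low sensitivity — is exactly why such codes underestimate true incidence: most code-positives are real, but many real cases are never coded.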
Gandhi, Sonja; Shariff, Salimah Z; Fleet, Jamie L; Weir, Matthew A; Jain, Arsh K; Garg, Amit X
2012-01-01
Objective To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission. Design Population-based retrospective validation study. Setting Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64,581) and at hospital admission (n=64,499). Main outcome measures Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard). Median serum sodium values comparing patients who were code positive and code negative for hyponatraemia. Results The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l. Conclusions The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission.
However, these codes underestimate the true incidence of hyponatraemia due to low sensitivity. PMID:23274673
Niu, Bolin; Forde, Kimberly A; Goldberg, David S
2015-01-01
Despite the use of administrative data to perform epidemiological and cost-effectiveness research on patients with hepatitis B or C virus (HBV, HCV), there are no data outside of the Veterans Health Administration validating whether International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes can accurately identify cirrhotic patients with HBV or HCV. The validation of such algorithms is necessary for future epidemiological studies. We evaluated the positive predictive value (PPV) of ICD-9-CM codes for identifying chronic HBV or HCV among cirrhotic patients within the University of Pennsylvania Health System, a large network that includes a tertiary care referral center, a community-based hospital, and multiple outpatient practices across southeastern Pennsylvania and southern New Jersey. We reviewed a random sample of 200 cirrhotic patients with ICD-9-CM codes for HCV and 150 cirrhotic patients with ICD-9-CM codes for HBV. The PPV of 1 inpatient or 2 outpatient HCV codes was 88.0% (168/191, 95% CI: 82.5-92.2%), while the PPV of 1 inpatient or 2 outpatient HBV codes was 81.3% (113/139, 95% CI: 73.8-87.4%). Several variations of the primary coding algorithm were evaluated to determine if different combinations of inpatient and/or outpatient ICD-9-CM codes could increase the PPV of the coding algorithm. ICD-9-CM codes can identify chronic HBV or HCV in cirrhotic patients with a high PPV and can be used in future epidemiologic studies to examine disease burden and the proper allocation of resources. Copyright © 2014 John Wiley & Sons, Ltd.
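Proportion confidence intervals like the PPV of 168/191 reported above are commonly computed with the Wilson score method. The paper does not state which interval method it used, so the sketch below is illustrative and its endpoints may differ slightly from the published 82.5–92.2%.

```python
import math

# Wilson score interval for a binomial proportion (e.g. a PPV of
# 168 confirmed cases out of 191 code-positive patients).

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(168, 191)
print(round(168 / 191, 3), round(lo, 3), round(hi, 3))
```

Unlike the simple Wald interval, the Wilson interval behaves well for proportions near 0 or 1, which is why it is preferred for high PPVs such as these.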
Validation of OpenFoam for heavy gas dispersion applications.
Mack, A; Spruijt, M P N
2013-11-15
In the present paper, heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data were validated against experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity-driven environment (slope), where the heavy gas induced the turbulence. For the code-to-code comparison, a hypothetical heavy gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases, applying the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in strong damping of the turbulence and therefore reduced heavy gas mixing. This interaction, based on buoyancy effects, was studied in particular to ensure that the turbulence-buoyancy coupling, and not the global behaviour of the turbulence modelling, is the main driver for the reduced mixing. For both test cases, comparisons were performed between OpenFoam and Fluent solutions, which were mainly in good agreement with each other. Besides steady-state solutions, the time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of the two codes were in good agreement with each other and with the experimental data, while the turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. Within the strong-turbulence environment, both codes showed excellent comparability. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chan, V. S.; Wong, C. P. C.; McLean, A. G.; Luo, G. N.; Wirth, B. D.
2013-10-01
The Xolotl code under development by PSI-SciDAC will enhance predictive modeling capability of plasma-facing materials under burning plasma conditions. The availability and application of experimental data to compare to code-calculated observables are key requirements to validate the breadth and content of physics included in the model and ultimately gain confidence in its results. A dedicated effort has been in progress to collect and organize a) a database of relevant experiments and their publications as previously carried out at sample exposure facilities in US and Asian tokamaks (e.g., DIII-D DiMES and EAST MAPES), b) diagnostic and surface analysis capabilities available at each device, and c) requirements for future experiments with code validation in mind. The content of this evolving database will serve as a significant resource for the plasma-material interaction (PMI) community. Work supported in part by the US Department of Energy under GA-DE-SC0008698, DE-AC52-07NA27344 and DE-AC05-00OR22725.
Seng, Elizabeth K; Lovejoy, Travis I
2013-12-01
This study psychometrically evaluates the Motivational Interviewing Treatment Integrity Code (MITI) for assessing fidelity to motivational interviewing to reduce sexual risk behaviors in people living with HIV/AIDS. Seventy-four sessions from a pilot randomized controlled trial of motivational interviewing to reduce sexual risk behaviors in people living with HIV were coded with the MITI. Participants reported sexual behavior at baseline, 3 months, and 6 months. Regarding reliability, excellent inter-rater reliability was achieved for measures of behavior frequency across the 12 sessions coded by both coders; global scales demonstrated poor intraclass correlations but adequate percent agreement. Regarding validity, principal components analyses indicated that a two-factor model accounted for an adequate amount of variance in the data. These factors were associated with decreases in sexual risk behaviors after treatment. The MITI is a reliable and valid measurement of treatment fidelity for motivational interviewing targeting sexual risk behaviors in people living with HIV/AIDS.
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors
Epiney, A.; Canepa, S.; Zerkak, O.; ...
2016-11-02
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represent the evolution of specific analysis aspects, including e.g. code version, transient specific simulation methodology and model "nodalisation". If properly set up, such environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6.
The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs are investigated. For the steady-state results, these include fuel temperatures distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Leyland, Jane
2014-01-01
In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
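The core task above — minimizing a hub-load metric over flap-harmonic amplitudes subject to deflection constraints — can be illustrated with a tiny bound-constrained quadratic example. The sketch below uses projected gradient descent, NOT the NLPQLP/SQP solver of the paper, and the sensitivity matrix A and baseline loads b are invented for illustration.

```python
# Minimize a quadratic hub-load metric J(x) = |A x - b|^2 subject to
# simple bounds on the flap amplitudes x, via projected gradient descent.
# A (load sensitivities) and b (baseline hub loads) are hypothetical.

def minimize_loads(A, b, bounds, steps=2000, lr=0.01):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(A))]
        # gradient of J: 2 * A^T r
        grad = [2 * sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # gradient step, then project back onto the deflection bounds
        x = [min(max(x[j] - lr * grad[j], bounds[j][0]), bounds[j][1])
             for j in range(n)]
    return x

A = [[1.0, 0.5], [0.2, 1.0]]          # hypothetical load sensitivities
b = [0.8, 0.3]                        # hypothetical baseline hub loads
x = minimize_loads(A, b, bounds=[(-0.5, 0.5), (-0.5, 0.5)])
print([round(v, 3) for v in x])
```

Here the unconstrained minimizer violates the first bound, so the solution sits on that bound — the same qualitative behavior that motivates constrained methods like SQP over the simple linear unconstrained approach mentioned in the paper.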
Methodology, status, and plans for development and assessment of the RELAP5 code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, G.W.; Riemke, R.A.
1997-07-01
RELAP5/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was and is to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes code requirements, models, solution scheme, language and structure, user interface, validation, and documentation. The paper also describes the current and near-term development program and provides an assessment of the code's strengths and limitations.
Nonlinear wave vacillation in the atmosphere
NASA Technical Reports Server (NTRS)
Antar, Basil N.
1987-01-01
The problem of vacillation in a baroclinically unstable flow field is studied through the time evolution of a single nonlinearly unstable wave. To this end, a computer code is being developed to solve numerically for the time evolution of the amplitude of such a wave. The final working code will be the end product of the development of a hierarchy of codes with increasing complexity. The first code in this series was completed and is undergoing several diagnostic analyses to verify its validity. The development of this code is detailed.
Three-dimensional water droplet trajectory code validation using an ECS inlet geometry
NASA Technical Reports Server (NTRS)
Breer, Marlin D.; Goodman, Mark P.
1993-01-01
A task was completed under NASA contract, the purpose of which was to validate a three-dimensional particle trajectory code with existing test data obtained from the Icing Research Tunnel at NASA-LeRC. The geometry analyzed was a flush-mounted environmental control system (ECS) inlet. Results of the study indicated good overall agreement between analytical predictions and wind tunnel test results at most flight conditions. Difficulties were encountered when predicting impingement characteristics of the droplets less than or equal to 13.5 microns in diameter. This difficulty was corrected to some degree by modifications to a module of the particle trajectory code; however, additional modifications will be required to accurately predict impingement characteristics of smaller droplets.
Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Bridges, James
2017-01-01
The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), are attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T1 = 1.764) on one of the meshes are also described. Finally, the proposed configuration for the farfield noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.
NASA Astrophysics Data System (ADS)
Lezberg, Erwin A.; Mularz, Edward J.; Liou, Meng-Sing
1991-03-01
The objectives and accomplishments of research in chemical reacting flows, including both experimental and computational problems are described. The experimental research emphasizes the acquisition of reliable reacting-flow data for code validation, the development of chemical kinetics mechanisms, and the understanding of two-phase flow dynamics. Typical results from two nonreacting spray studies are presented. The computational fluid dynamics (CFD) research emphasizes the development of efficient and accurate algorithms and codes, as well as validation of methods and modeling (turbulence and kinetics) for reacting flows. Major developments of the RPLUS code and its application to mixing concepts, the General Electric combustor, and the Government baseline engine for the National Aerospace Plane are detailed. Finally, the turbulence research in the newly established Center for Modeling of Turbulence and Transition (CMOTT) is described.
Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L
2013-12-30
To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study offered the validation of algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations with ICD-9 786.07 (wheezing) revealing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Mertens, H W; Milburn, N J; Collins, W E
2000-12-01
Two practical color vision tests were developed and validated for use in screening Air Traffic Control Specialist (ATCS) applicants for work at en route center or terminal facilities. The development of the tests involved careful reproduction/simulation of color-coded materials from the most demanding, safety-critical color task performed in each type of facility. The tests were evaluated using 106 subjects with normal color vision and 85 with color vision deficiency. The en route center test, named the Flight Progress Strips Test (FPST), required the identification of critical red/black coding in computer printing and handwriting on flight progress strips. The terminal option test, named the Aviation Lights Test (ALT), simulated red/green/white aircraft lights that must be identified in night ATC tower operations. Color-coding is a non-redundant source of safety-critical information in both tasks. The FPST was validated by direct comparison of responses to strip reproductions with responses to the original flight progress strips and a set of strips selected independently. Validity was high; Kappa = 0.91 with original strips as the validation criterion and 0.86 with different strips. The light point stimuli of the ALT were validated physically with a spectroradiometer. The reliabilities of the FPST and ALT were estimated with Cronbach's alpha as 0.93 and 0.98, respectively. The high job-relevance, validity, and reliability of these tests increase the effectiveness and fairness of ATCS color vision testing.
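Cronbach's alpha, used above to estimate test reliability, is straightforward to compute from item-level scores: it compares the sum of per-item variances to the variance of subjects' total scores. The toy data below are invented, not the FPST/ALT responses.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
# rows = one list of item scores per subject.

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    def var(xs):                           # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in rows]) for i in range(k)]
    total_var = var([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 0, 1, 1]]
print(round(cronbach_alpha(scores), 3))  # → 0.696
```

Values approaching 1, like the 0.93 and 0.98 reported, indicate that the items vary together across subjects, i.e. high internal consistency.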
Computer Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation, generally applicable in practice and based on differences in epistemic strategies and scopes.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure, and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
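The sequential-verification idea described above reduces, at its simplest, to comparing a new code version's calculated quantities against the previous version's stored values and flagging any difference beyond a tight tolerance. The sketch below illustrates that comparison step only; the result records and quantity names are invented, not RELAP5-3D output.

```python
# Compare two versions' results; flag quantities that changed beyond
# a relative tolerance (unintended-change detection between releases).

def sequential_verify(previous, current, rel_tol=1e-12):
    """Return names of quantities whose values changed between versions."""
    changed = []
    for name, old in previous.items():
        new = current[name]
        denom = max(abs(old), abs(new), 1e-300)  # guard against zero values
        if abs(new - old) / denom > rel_tol:
            changed.append(name)
    return changed

v1 = {"peak_clad_temp": 611.234567890123, "system_pressure": 7.1e6}
v2 = {"peak_clad_temp": 611.234567890123, "system_pressure": 7.1000001e6}
print(sequential_verify(v1, v2))  # → ['system_pressure']
```

An empty result means the new version reproduces the old one to within tolerance; any flagged quantity must then be traced to an intentional model change or a defect.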
Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.
Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R
2017-06-01
Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dokhane, A.; Canepa, S.; Ferroukhi, H.
For stability analyses of the Swiss operating Boiling-Water-Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and thereby to achieve an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)
Gauld, Ian C.; Giaquinto, J. M.; Delashmitt, J. S.; ...
2016-01-01
Destructive radiochemical assay measurements of spent nuclear fuel rod segments from an assembly irradiated in the Three Mile Island unit 1 (TMI-1) pressurized water reactor have been performed at Oak Ridge National Laboratory (ORNL). Assay data are reported for five samples from two fuel rods of the same assembly. The TMI-1 assembly was a 15 x 15 design with an initial enrichment of 4.013 wt% 235U, and the measured samples achieved burnups between 45.5 and 54.5 gigawatt days per metric ton of initial uranium (GWd/t). Measurements were performed mainly using inductively coupled plasma mass spectrometry after elemental separation via high-performance liquid chromatography. High-precision measurements were achieved using isotope dilution techniques for many of the lanthanides, uranium, and plutonium isotopes. Measurements are reported for more than 50 different isotopes and 16 elements. One of the two TMI-1 fuel rods measured in this work had been measured previously by Argonne National Laboratory (ANL), and those data have been widely used to support code and nuclear data validation. The new ORNL measurements therefore provided an important opportunity to independently cross-check results against the previous ANL measurements. The measured nuclide concentrations are used to validate burnup calculations using the SCALE nuclear systems modeling and simulation code suite. These results show that the new measurements provide reliable benchmark data for computer code validation.
Simard, Marc; Sirois, Caroline; Candas, Bernard
2018-05-01
To validate and compare the performance of an International Classification of Diseases, Tenth Revision (ICD-10) version of a combined comorbidity index merging conditions of the Charlson and Elixhauser measures against the individual measures in the prediction of 30-day mortality, and to select a weight derivation method providing optimal performance across ICD-9 and ICD-10 coding systems. Using 2 adult population-based cohorts of patients with hospital admissions in ICD-9 (2005, n=337,367) and ICD-10 (2011, n=348,820), we validated a combined comorbidity index by predicting 30-day mortality with logistic regression. To assess the performance of the combined index and both individual measures, factors impacting index performance, such as population characteristics and weight derivation methods, were accounted for. We applied 3 scoring methods (Van Walraven, Schneeweiss, and Charlson) and determined which provides the best predictive values. The combined index [c-statistic: 0.853 (95% confidence interval [CI], 0.848-0.856)] performed better than the original Charlson [0.841 (95% CI, 0.835-0.844)] or Elixhauser [0.841 (95% CI, 0.837-0.844)] measures on the ICD-10 cohort. All weight derivation methods provided similarly high discrimination for the combined index (Van Walraven: 0.852, Schneeweiss: 0.851, Charlson: 0.849). Results were consistent across both coding systems. The combined index remains valid with both ICD-9 and ICD-10 coding systems, and the 3 weight derivation methods evaluated provided consistent high performance across those coding systems.
Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform
NASA Astrophysics Data System (ADS)
Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic
2015-11-01
The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The FPSDP currently has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes, with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD M3D-C1 code, and the electromagnetic hybrid NOVA-K eigenmode code. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract DE-AC02-09CH11466.
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...
2017-03-20
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which would necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the second for the demonstration cases and target HPC systems employed.
Numerical Analysis of Dusty-Gas Flows
NASA Astrophysics Data System (ADS)
Saito, T.
2002-02-01
This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue, and the code implementation on Origin2000 is also described. Flow profiles of both the gas and solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions of unsteady multidimensional simulations.
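The operator-splitting idea mentioned in this abstract (advancing the gas and particle phases separately within each time step, so that each phase can use its own scheme) can be illustrated on a toy linear drag coupling. The equations, constants, and exact sub-step updates below are illustrative assumptions, not the paper's actual model:

```python
# Hedged toy of operator (Lie) splitting for two coupled phases:
#   dv_p/dt = k (v_g - v_p)        (particles dragged by gas)
#   dv_g/dt = -mu k (v_g - v_p)    (gas loses momentum to particles)
# Each sub-step freezes the other phase and uses its own (here exact)
# update, mimicking "independent schemes per phase". Constants invented.
import math

def step_split(vg, vp, k, mu, dt):
    # Sub-step 1: particle phase, gas velocity held fixed.
    vp = vg + (vp - vg) * math.exp(-k * dt)
    # Sub-step 2: gas phase, particle velocity held fixed.
    vg = vp + (vg - vp) * math.exp(-mu * k * dt)
    return vg, vp

def relax(vg0, vp0, k, mu, t, nsteps):
    vg, vp, dt = vg0, vp0, t / nsteps
    for _ in range(nsteps):
        vg, vp = step_split(vg, vp, k, mu, dt)
    return vg, vp

# Both phases should relax toward the momentum-weighted equilibrium
# velocity (vg0 + mu*vp0)/(1 + mu) of the exact coupled system.
print(relax(1.0, 0.0, k=5.0, mu=0.5, t=10.0, nsteps=2000))
```

The splitting error is first order in the step size here; the appeal is exactly what the abstract describes: each phase's sub-step can use whatever scheme suits it.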
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...
2016-10-13
This paper summarizes the findings from Phase Ib of the Offshore Code Comparison, Collaboration, Continued with Correlation (OC5) project. OC5 is a project run under the International Energy Agency (IEA) Wind Research Task 30 and is focused on validating the tools used for modelling offshore wind systems through the comparison of simulated responses of select offshore wind systems (and components) to physical test data. For Phase Ib of the project, simulated hydrodynamic loads on a flexible cylinder fixed to a sloped bed were validated against test measurements made in the shallow water basin at the Danish Hydraulic Institute (DHI) with support from the Technical University of Denmark (DTU). The first phase of OC5 examined two simple cylinder structures (Phases Ia and Ib) to focus on validation of the hydrodynamic models used in the various tools before moving on to more complex offshore wind systems and the associated coupled physics. Verification and validation activities such as these lead to improvement of offshore wind modelling tools, which will enable the development of more innovative and cost-effective offshore wind designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
Validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes requires improved understanding and temporally and spatially resolved, integral-scale validation data of the heat flux incident to a complex object, in addition to measurements of the thermal response of that object within the fire plume. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
Chan, Leighton; Shumway-Cook, Anne; Yorkston, Kathryn M; Ciol, Marcia A; Dudgeon, Brian J; Hoffman, Jeanne M
2005-05-01
To design and validate a methodology that identifies secondary conditions using International Classification of Diseases, 9th Revision (ICD-9) codes. Secondary conditions were identified through a literature search and a survey of Washington State physiatrists. These conditions were translated into ICD-9 codes, and this list was then validated against a national sample of Medicare survey respondents with differing levels of mobility and activities of daily living (ADL) disability. National survey. Participants (N=9731) in the 1999 Medicare Current Beneficiary Survey with no, mild, moderate, and severe mobility and ADL disability. Not applicable. Percentage of survey respondents with a secondary condition. The secondary conditions were grouped into 4 categories: medical, psychosocial, musculoskeletal, and dysphagia related (problems associated with difficulty in swallowing). Our literature search and survey of 26 physiatrists identified 64 secondary conditions, including depression, decubitus ulcers, and deconditioning. Overall, 70.4% of all survey respondents were treated for a secondary condition. We found a significant relation between increasing mobility as well as ADL disability and increasing numbers of secondary conditions (chi-square test for trend, P < .001). This relation existed for all categories of secondary conditions: medical (chi-square test for trend, P < .001), psychosocial (chi-square test for trend, P < .001), musculoskeletal (chi-square test for trend, P < .001), and dysphagia related (chi-square test for trend, P < .001). We created a valid ICD-9-based methodology that identified secondary conditions in Medicare survey respondents and discriminated between people with different degrees of disability. This methodology will be useful for health services researchers who study the frequency and impact of secondary conditions.
Cracking the barcode of fullerene-like cortical microcolumns.
Tozzi, Arturo; Peters, James F; Ori, Ottorino
2017-03-22
Artificial neural systems and nervous graph theoretical analysis rely upon the stance that the neural code is embodied in logic circuits, e.g., spatio-temporal sequences of ON/OFF spiking neurons. Nevertheless, this assumption does not fully explain complex brain functions. Here we show how nervous activity, other than logic circuits, could instead depend on topological transformations and symmetry constraints occurring at the micro-level of the cortical microcolumn, i.e., the embryological, anatomical and functional basic unit of the brain. Tubular microcolumns can be flattened into fullerene-like two-dimensional lattices, equipped with about 80 nodes standing for pyramidal neurons where neural computations take place. We show how the countless possible combinations of activated neurons embedded in the lattice resemble a barcode. Although further experimental verification is required to validate our claim, different assemblies of firing neurons might have the appearance of diverse codes, each one responsible for a single mental activity. A two-dimensional fullerene-like lattice, grounded on simple topological changes standing for pyramidal neurons' activation, not only displays analogies with the real microcolumn's microcircuitry and the neural connectome, but also offers the potential for the manufacture of plastic, robust and fast artificial networks in robotic forms of full-fledged neural systems.
Peng, Mingkai; Chen, Guanmin; Kaplan, Gilaad G; Lix, Lisa M; Drummond, Neil; Lucyk, Kelsey; Garies, Stephanie; Lowerison, Mark; Weibe, Samuel; Quan, Hude
2016-09-01
Electronic medical records (EMR) can be a cost-effective source for hypertension surveillance. However, diagnosis of hypertension in EMR is commonly under-coded, which warrants the need to review blood pressure and antihypertensive drugs for hypertension case identification. We included all the patients actively registered in The Health Improvement Network (THIN) database, UK, on 31 December 2011. Three case definitions, using diagnosis code, antihypertensive drug prescriptions and abnormal blood pressure, respectively, were used to identify hypertension patients. We compared the prevalence and treatment rate of hypertension in THIN with results from the Health Survey for England (HSE) in 2011. Compared with the prevalence reported by HSE (29.7%), the use of diagnosis code alone (14.0%) underestimated hypertension prevalence. The use of any of the definitions (38.4%) or the combination of antihypertensive drug prescriptions and abnormal blood pressure (38.4%) yielded higher prevalence than HSE. The use of diagnosis code or two abnormal blood pressure records within a 2-year period (31.1%) gave prevalence and treatment rates of hypertension similar to HSE. Different definitions should be used for different study purposes. The definition of 'diagnosis code or two abnormal blood pressure records within a 2-year period' could be used for hypertension surveillance in THIN.
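The competing case definitions in the record above are essentially boolean predicates over patient records, and combining them changes the prevalence estimate. A minimal sketch with invented records and the conventional 140/90 mmHg cutoff (an assumption for illustration; the study's exact definition details may differ):

```python
# Hedged sketch: hypertension case definitions as predicates over
# patient records. Records and thresholds are invented, not THIN data.

def has_dx_code(rec):
    return "hypertension" in rec["codes"]

def on_antihypertensives(rec):
    return len(rec["bp_drugs"]) > 0

def two_abnormal_bp(rec):
    return sum(1 for sys, dia in rec["bp"] if sys >= 140 or dia >= 90) >= 2

def prevalence(records, definition):
    return sum(definition(r) for r in records) / len(records)

patients = [
    {"codes": {"hypertension"}, "bp_drugs": [], "bp": [(150, 95), (145, 92)]},
    {"codes": set(), "bp_drugs": ["amlodipine"], "bp": [(138, 85)]},
    {"codes": set(), "bp_drugs": [], "bp": [(152, 96), (148, 91)]},
    {"codes": set(), "bp_drugs": [], "bp": [(120, 80), (118, 76)]},
]

# 'Diagnosis code OR two abnormal BP records': the surveillance
# definition the authors recommend for THIN.
combined = lambda r: has_dx_code(r) or two_abnormal_bp(r)
print(prevalence(patients, has_dx_code))  # → 0.25
print(prevalence(patients, combined))     # → 0.5
```

As in the study, the code-only definition undercounts (patient 3 is never coded despite repeatedly abnormal readings), which is exactly why the combined definition widens the net.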
FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.
2017-01-01
Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. 
The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
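At its core, the inventory problem a code like Fispact-II solves is a linear system dN/dt = AN over thousands of nuclides. A toy two-nuclide chain admits the closed-form Bateman solution, which also serves as a check on a naive time-marching scheme; the decay constants and initial inventory below are invented for illustration:

```python
# Hedged toy of a decay-chain inventory calculation:
#   N1 --lam1--> N2 --lam2--> stable
# compared against a simple explicit Euler march. Constants invented.
import math

def bateman_two_step(n1_0, lam1, lam2, t):
    """Closed-form parent/daughter populations (lam1 != lam2)."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

def euler_two_step(n1_0, lam1, lam2, t, nsteps):
    """Explicit Euler march of dN/dt = A N for the same chain."""
    n1, n2, dt = n1_0, 0.0, t / nsteps
    for _ in range(nsteps):
        d1 = -lam1 * n1
        d2 = lam1 * n1 - lam2 * n2
        n1, n2 = n1 + d1 * dt, n2 + d2 * dt
    return n1, n2

exact = bateman_two_step(1.0e6, lam1=0.1, lam2=0.5, t=5.0)
approx = euler_two_step(1.0e6, 0.1, 0.5, 5.0, nsteps=20000)
print(exact, approx)
```

Production codes face the same system with thousands of coupled nuclides and widely separated decay constants, which is why they use stiff solvers rather than explicit marching; this sketch only shows the shape of the problem.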
Caregiver Activation and Home Hospice Nurse Communication in Advanced Cancer Care.
Dingley, Catherine E; Clayton, Margaret; Lai, Djin; Doyon, Katherine; Reblin, Maija; Ellington, Lee
Activated patients have the skills, knowledge, and confidence to manage their care, resulting in positive outcomes such as lower hospital readmission and fewer adverse consequences due to poor communication with providers. Despite extensive evidence on patient activation, little is known about activation in the home hospice setting, when family caregivers assume more responsibility in care management. We examined caregiver and nurse communication behaviors associated with caregiver activation during home hospice visits of patients with advanced cancer using a prospective observational design. We adapted Street's Activation Verbal Coding tool to caregiver communication and used qualitative thematic analysis to develop codes for nurse communications that preceded and followed each activation statement in 60 audio-recorded home hospice visits. Caregiver communication that reflected activation included demonstrating knowledge regarding the patient/care, describing care strategies, expressing opinions regarding care, requesting explanations of care, expressing concern about the patient, and redirecting the conversation toward the patient. Nurses responded by providing education, reassessing the patient/care environment, validating communications, clarifying care issues, updating/revising care, and making recommendations for future care. Nurses prompted caregiver activation through focused care-specific questions, open-ended questions/statements, and personal questions. Few studies have investigated nurse/caregiver communication in home hospice, and, to our knowledge, no other studies focused on caregiver activation. The current study provides a foundation to develop a framework of caregiver activation through enhanced communication with nurses. Activated caregivers may facilitate patient-centered care through communication with nurses in home hospice, thus resulting in enhanced outcomes for patients with advanced cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vance, J.N.; Holderness, J.H.; James, D.W.
1992-12-01
Waste stream scaling factors based on sampling programs are vulnerable to one or more of the following factors: sample representativeness, analytic accuracy, and measurement sensitivity. As an alternative to sample analyses, or as a verification of the sampling results, this project proposes the use of the RADSOURCE code, which accounts for the release of fuel-source radionuclides. Once the release rates of these nuclides from fuel are known, the code develops scaling factors for waste streams based on easily measured Cobalt-60 (Co-60) and Cesium-137 (Cs-137). The project team developed mathematical models to account for the appearance rate of 10 CFR 61 radionuclides in reactor coolant. They based these models on the chemistry and nuclear physics of the radionuclides involved. Next, they incorporated the models into a computer code that calculates plant waste stream scaling factors based on reactor coolant gamma-isotopic data. Finally, the team performed special sampling at 17 reactors to validate the models in the RADSOURCE code.
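In its simplest form, the scaling-factor approach described above reduces to a ratio: the hard-to-measure nuclide's activity relative to an easily measured key nuclide (Co-60 or Cs-137), applied to the key nuclide measured in the waste stream. A minimal sketch with invented activities (not RADSOURCE output; the nuclide pairing is a hypothetical example):

```python
# Hedged sketch of the scaling-factor idea: estimate a hard-to-measure
# nuclide in a waste stream from an easily measured key nuclide and a
# coolant-derived ratio. All activity values are invented.

def scaling_factor(coolant_activity_nuclide, coolant_activity_key):
    """Ratio of the difficult nuclide to its key nuclide in coolant."""
    return coolant_activity_nuclide / coolant_activity_key

def estimate_waste_activity(measured_key_in_waste, factor):
    """Apply the coolant-derived factor to the measured key nuclide."""
    return measured_key_in_waste * factor

# Hypothetical coolant gamma-isotopic data (Bq/g): a nuclide scaled to Co-60.
f = scaling_factor(coolant_activity_nuclide=2.0e2, coolant_activity_key=1.0e3)
print(estimate_waste_activity(measured_key_in_waste=5.0e4, factor=f))
```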
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan; ...
2017-08-29
This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine have been made available and which were in accordance with the International Electrotechnical Commission 61400-13 guidelines, was identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. A detailed analysis compares results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and the standard deviations along with the power spectral densities of select turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and experimental measurements. These agreements are discussed in detail in this paper, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied, showing through a practical example how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
Ridao-Fernández, Carmen; Ojeda, Joaquín; Benítez-Lugo, Marisa; Sevillano, José Luis
2016-01-01
Objective: The aim of this study was to design and validate a functional assessment scale for assisted gait with forearm crutches (Chamorro Assisted Gait Scale, CHAGS) and to assess its reliability in people with sprained ankles. Design: Thirty subjects who suffered from a sprained ankle (anterior talofibular ligament, first and second degree) were included in the study. A modified Delphi technique was used to obtain the content validity. The selected items were: pelvic and scapular girdle dissociation (1), deviation of center of gravity (2), crutch inclination (3), step rhythm (4), symmetry of step length (5), cross support (6), simultaneous support of foot and crutch (7), forearm off (8), facing forward (9) and fluency (10). Two raters each twice viewed video recordings of the subjects' gait. The criterion-related validity was determined by correlation between CHAGS and the Coding of Eight Criteria of Qualitative Gait Analysis (Viel Coding). Internal consistency and inter- and intra-rater reliability were also tested. Results: CHAGS obtained a high, negative correlation with Viel Coding. We obtained good internal consistency, and the intra-class correlation coefficients oscillated between 0.97 and 0.99, while the minimal detectable changes were acceptable. Conclusion: The CHAGS scale is a valid and reliable tool for assessing assisted gait with crutches in people with sprained ankles performing partial relief of the lower limbs. PMID:27168236
[ENT medicine and head and neck surgery in the G-DRG system 2008].
Franz, D; Roeder, N; Hörmann, K; Alberty, J
2008-09-01
Further developments in the German DRG system have been incorporated into the 2008 version. For ENT medicine and head and neck surgery significant changes concerning coding of diagnoses, medical procedures and concerning the DRG-structure were made. Analysis of relevant diagnoses, medical procedures and G-DRGs in the versions 2007 and 2008 based on the publications of the German DRG institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes for 2008 focussed on the development of DRG structure, DRG validation and codes for medical procedures. The outcome of these changes for German hospitals may vary depending on the range of activities. The G-DRG system has gained in complexity again. High demands are made on correct and complete coding of complex ENT and head and neck surgery cases. Quality of case allocation within the G-DRG system has been improved. For standard cases quality of case allocation is adequate. Nevertheless, further adjustments of the G-DRG system especially for cases with complex neck surgery are necessary.
The PP1 binding code: a molecular-lego strategy that governs specificity.
Heroes, Ewald; Lesage, Bart; Görnemann, Janina; Beullens, Monique; Van Meervelt, Luc; Bollen, Mathieu
2013-01-01
Ser/Thr protein phosphatase 1 (PP1) is a single-domain hub protein with nearly 200 validated interactors in vertebrates. PP1-interacting proteins (PIPs) are ubiquitously expressed but show an exceptional diversity in brain, testis and white blood cells. The binding of PIPs is mainly mediated by short motifs that dock to surface grooves of PP1. Although PIPs often contain variants of the same PP1 binding motifs, they differ in the number and combination of docking sites. This molecular-lego strategy for binding to PP1 creates holoenzymes with unique properties. The PP1 binding code can be described as specific, universal, degenerate, nonexclusive and dynamic. PIPs control associated PP1 by interference with substrate recruitment or access to the active site. In addition, some PIPs have a subcellular targeting domain that promotes dephosphorylation by increasing the local concentration of PP1. The diversity of the PP1 interactome and the properties of the PP1 binding code account for the exquisite specificity of PP1 in vivo.
Validation of Carotid Artery Revascularization Coding in Ontario Health Administrative Databases.
Hussain, Mohamad A; Mamdani, Muhammad; Saposnik, Gustavo; Tu, Jack V; Turkel-Parrella, David; Spears, Julian; Al-Omran, Mohammed
2016-04-02
The positive predictive value (PPV) of carotid endarterectomy (CEA) and carotid artery stenting (CAS) procedure and post-operative complication coding was assessed in Ontario health administrative databases. Between 1 April 2002 and 31 March 2014, a random sample of 428 patients was identified using Canadian Classification of Health Intervention (CCI) procedure codes and Ontario Health Insurance Plan (OHIP) billing codes from administrative data. A blinded chart review was conducted at two high-volume vascular centers to assess the level of agreement between the administrative records and the corresponding patients' hospital charts. PPV was calculated with 95% confidence intervals (CIs) to estimate the validity of CEA and CAS coding, using hospital charts as the gold standard. The sensitivity of CEA and CAS coding was also assessed by linking two independent databases of 540 CEA-treated patients (Ontario Stroke Registry) and 140 CAS-treated patients (single-center CAS database) to administrative records. The PPV for CEA ranged from 99% to 100% and sensitivity ranged from 81.5% to 89.6% using CCI and OHIP codes. A CCI code with a PPV of 87% (95% CI, 78.8-92.9) and sensitivity of 92.9% (95% CI, 87.4-96.1) in identifying CAS was also identified. The PPV for post-admission complication diagnosis coding was 71.4% (95% CI, 53.7-85.4) for stroke/transient ischemic attack and 82.4% (95% CI, 56.6-96.2) for myocardial infarction. Our analysis demonstrated that the codes used in administrative databases accurately identify CEA- and CAS-treated patients. Researchers can confidently use administrative data to conduct population-based studies of CEA and CAS.
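PPV estimates like those above are binomial proportions with confidence intervals. A sketch using the Wilson score interval, one common choice for proportions near 0 or 1 (the paper does not state which interval method it used, and the counts below are illustrative, not the study's):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (e.g. a PPV from chart review)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative counts: 87 true positives among 100 code-positive charts
lo, hi = wilson_ci(87, 100)
print(f"PPV 87.0% (95% CI {lo:.1%} to {hi:.1%})")
```

Note the interval is asymmetric around the point estimate, matching the shape of the CIs quoted in the abstract.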
Zebra: An advanced PWR lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, L.; Wu, H.; Zheng, Y.
2012-07-01
This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory of Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the method of characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results obtained during validation demonstrate that the code has good precision and high efficiency.
Center for Multiscale Plasma Dynamics: Report on Activities (UCLA/MIT), 2009-2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troy Carter
2011-04-18
The final 'phaseout' year of the CMPD ended July 2010; a no-cost extension was requested until May 2011 to enable the MIT subcontract funds to be fully utilized. Research progress over this time included verification and validation activities for the BOUT and BOUT++ codes, studies of spontaneous reconnection in the VTF facility at MIT, and studies of the interaction between Alfven waves and drift waves in LAPD. The CMPD also hosted the 6th plasma physics winter school in 2010, jointly with the Center for Magnetic Self-Organization, an NSF frontier center; significant funding for this most recent iteration of the winter school came from NSF.
Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul
2017-11-01
The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. 
Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings.
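The inter-rater reliability reported above (κ = 0.7037) is Cohen's kappa, which discounts the agreement two coders would reach by chance. A minimal sketch on toy data; the procedure codes below are illustrative, not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal frequencies per code
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders assigning (hypothetical) procedure codes to the same six operations
a = ["47.0", "74.1", "79.3", "47.0", "85.2", "74.1"]
b = ["47.0", "74.1", "79.3", "53.0", "85.2", "74.1"]
print(round(cohens_kappa(a, b), 3))  # → 0.786
```

Values between 0.61 and 0.80 are conventionally read as "substantial" agreement, which is how the abstract characterizes its κ of 0.7037.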
Video analysis for insight and coding: Examples from tutorials in introductory physics
NASA Astrophysics Data System (ADS)
Scherr, Rachel E.
2009-12-01
The increasing ease of video recording offers new opportunities to create richly detailed records of classroom activities. These recordings, in turn, call for research methodologies that balance generalizability with interpretive validity. This paper shares methodology for two practices of video analysis: (1) gaining insight into specific brief classroom episodes and (2) developing and applying a systematic observational protocol for a relatively large corpus of video data. These two aspects of analytic practice are illustrated in the context of a particular research interest but are intended to serve as general suggestions.
CFD validation experiments for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on an evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments could provide new validation data.
Cluster Analysis of Rat Olfactory Bulb Responses to Diverse Odorants
Falasconi, Matteo; Leon, Michael; Johnson, Brett A.; Marco, Santiago
2012-01-01
In an effort to deepen our understanding of mammalian olfactory coding, we have used an objective method to analyze a large set of odorant-evoked activity maps collected systematically across the rat olfactory bulb to determine whether such an approach could identify specific glomerular regions that are activated by related odorants. To that end, we combined fuzzy c-means clustering methods with a novel validity approach based on cluster stability to evaluate the significance of the fuzzy partitions on a data set of glomerular layer responses to a large diverse group of odorants. Our results confirm the existence of glomerular response clusters to similar odorants. They further indicate a partial hierarchical chemotopic organization wherein larger glomerular regions can be subdivided into smaller areas that are rather specific in their responses to particular functional groups of odorants. These clusters bear many similarities to, as well as some differences from, response domains previously proposed for the glomerular layer of the bulb. These data also provide additional support for the concept of an identity code in the mammalian olfactory system. PMID:22459165
NASA Technical Reports Server (NTRS)
Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.
1993-01-01
This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.
FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0
Durbin, Timothy J.; Bond, Linda D.
1998-01-01
This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI x3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
Bustamante, Carlos; Ovenden, Jennifer R
2016-01-01
The silver gemfish Rexea solandri is an important economic resource but is vulnerable to overfishing in Australian waters. The complete mitochondrial genome sequence is described from 1.6 million reads obtained via next-generation sequencing. The total length of the mitogenome is 16,350 bp, comprising 2 rRNA genes, 13 protein-coding genes, 22 tRNA genes and 2 non-coding regions. The mitogenome sequence was validated against sequences of PCR fragments and BLAST queries of GenBank. Gene order was equivalent to that found in marine fishes.
2012-01-01
our own work for this discussion. DoD Instruction 5000.61 defines model validation as "the process of determining the degree to which a model and its... determined that RMAT is highly concrete code, potentially leading to redundancies in the code itself and making RMAT more difficult to maintain...system conceptual models valid, and are the data used to support them adequate? (Chapters Two and Three) 2. Are the sources and methods for populating
1994-08-01
The technique was modified to calculate the drag using the non-intrusive LDV and sidewall pressure measurements rather
Implementation of a Blowing Boundary Condition in the LAURA Code
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Gnoffo, Peter A.
2008-01-01
Preliminary steps toward modeling a coupled ablation problem using a finite-volume Navier-Stokes code (LAURA) are presented in this paper. Implementation of a surface boundary condition with mass transfer (blowing) is described followed by verification and validation through comparisons with analytic results and experimental data. Application of the code to a carbon-nosetip ablation problem is demonstrated and the results are compared with previously published data. It is concluded that the code and coupled procedure are suitable to support further ablation analyses and studies.
NASA Astrophysics Data System (ADS)
Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.
2006-02-01
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
Epstein, Richard H; Dexter, Franklin
2017-07-01
Comorbidity adjustment is often performed during outcomes and health care resource utilization research. Our goal was to develop an efficient algorithm in structured query language (SQL) to determine the Elixhauser comorbidity index. We wrote an SQL algorithm to calculate the Elixhauser comorbidities from Diagnosis Related Group and International Classification of Diseases (ICD) codes. Validation was by comparison to expected comorbidities from combinations of these codes and to the 2013 Nationwide Readmissions Database (NRD). The SQL algorithm matched perfectly with expected comorbidities for all combinations of ICD-9 or ICD-10 codes and Diagnosis Related Groups. Of 13 585 859 evaluable NRD records, the algorithm matched 100% of the listed comorbidities. Processing time was ∼0.05 ms/record. The SQL Elixhauser code was efficient and computationally identical to the SAS algorithm used for the NRD. This algorithm may be useful where preprocessing of large datasets in a relational database environment and comorbidity determination is desired before statistical analysis. A validated SQL procedure to calculate Elixhauser comorbidities and the van Walraven index from ICD-9 or ICD-10 discharge diagnosis codes has been published.
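The Elixhauser determination described above reduces to matching each discharge diagnosis code against per-category code lists. A hedged Python sketch of that lookup; the category names and ICD-10 prefixes below are a tiny illustrative subset, not the validated mapping, and real administrative data often stores codes without the decimal point:

```python
# Illustrative subset only: the validated Elixhauser mapping covers ~30
# categories; these names and ICD-10 prefixes are examples, not the real list.
ELIXHAUSER_PREFIXES = {
    "congestive_heart_failure": ("I50",),
    "renal_failure": ("N18", "N19"),
    "diabetes_uncomplicated": ("E119",),  # stored without the dot in many datasets
}

def elixhauser_flags(diagnosis_codes):
    """Return the comorbidity categories triggered by one discharge record."""
    flags = set()
    for code in diagnosis_codes:
        normalized = code.replace(".", "")  # tolerate dotted and undotted forms
        for category, prefixes in ELIXHAUSER_PREFIXES.items():
            if any(normalized.startswith(p.replace(".", "")) for p in prefixes):
                flags.add(category)
    return flags

print(sorted(elixhauser_flags(["I50.9", "N18.3", "Z51.1"])))
```

An SQL implementation like the paper's would express the same prefix matches as `LIKE` predicates in a single set-based pass, which is what makes per-record processing so fast.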
A QR code identification technology in package auto-sorting system
NASA Astrophysics Data System (ADS)
di, Yi-Juan; Shi, Jian-Ping; Mao, Guo-Yong
2017-07-01
Traditional manual sorting cannot keep pace with the development of Chinese logistics. To sort packages more effectively, a QR code recognition technique is proposed to identify the QR code labels on packages in a package auto-sorting system. Experimental results, compared with other algorithms in the literature, demonstrate that the proposed method is valid and its performance is superior to the other algorithms.
Validation of CFD/Heat Transfer Software for Turbine Blade Analysis
NASA Technical Reports Server (NTRS)
Kiefer, Walter D.
2004-01-01
I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases, for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters.
A good deal of this internship will be to become familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.
Fleet, Jamie L; Shariff, Salimah Z; Gandhi, Sonja; Weir, Matthew A; Jain, Arsh K; Garg, Amit X
2012-01-01
Objectives Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission. Design Population-based validation study. Setting 12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497). Primary outcome Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively). Results The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively. Conclusions Patients with hospital encounters who were ICD-10 E87.5 hyperkalaemia code positive and negative had distinct higher and lower serum potassium values, respectively. However, due to very low sensitivity, the incidence of hyperkalaemia is underestimated. PMID:23274674
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Validation of ICD-9 Codes for Stable Miscarriage in the Emergency Department.
Quinley, Kelly E; Falck, Ailsa; Kallan, Michael J; Datner, Elizabeth M; Carr, Brendan G; Schreiber, Courtney A
2015-07-01
International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes have not been validated for identifying cases of missed abortion, where a pregnancy is no longer viable but the cervical os remains closed. Our goal was to assess whether ICD-9 code "632" for missed abortion has high sensitivity and positive predictive value (PPV) in identifying patients in the emergency department (ED) with cases of stable early pregnancy failure (EPF). We studied females ages 13-50 years presenting to the ED of an urban academic medical center. We approached our analysis from two perspectives, evaluating both the sensitivity and PPV of ICD-9 code "632" in identifying patients with stable EPF. All patients with chief complaints "pregnant and bleeding" or "pregnant and cramping" over a 12-month period were identified. We randomly reviewed two months of patient visits and calculated the sensitivity of ICD-9 code "632" for true cases of stable miscarriage. To establish the PPV of ICD-9 code "632" for capturing missed abortions, we identified patients whose visits from the same time period were assigned ICD-9 code "632," and identified those with actual cases of stable EPF. We reviewed 310 patient records (17.6% of 1,762 sampled). Thirteen of 31 patient records of true stable EPF had been assigned the ICD-9 code for missed abortion (sensitivity=41.9%), and 140 of the 142 patients without EPF were not assigned the ICD-9 code "632" (specificity=98.6%). Of the 52 eligible patients identified by ICD-9 code "632," 39 cases met the criteria for stable EPF (PPV=75.0%). ICD-9 code "632" has low sensitivity for identifying stable EPF, but its high specificity and moderately high PPV are valuable for studying cases of stable EPF in epidemiologic studies using administrative data.
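The reported percentages can be reproduced directly from the counts in the abstract (note that the sensitivity/specificity estimates and the PPV estimate come from different record samples, so they do not form a single 2x2 table):

```python
# Counts taken directly from the abstract; the sensitivity/specificity sample
# and the PPV sample are different record sets, not one 2x2 table.
sensitivity = 13 / 31    # EPF cases that carried ICD-9 code "632"
specificity = 140 / 142  # non-EPF visits not assigned code "632"
ppv = 39 / 52            # code-"632" visits that were true stable EPF

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} PPV={ppv:.1%}")
# → sensitivity=41.9% specificity=98.6% PPV=75.0%
```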
Tohira, Hideo; Jacobs, Ian; Mountain, David; Gibson, Nick; Yeo, Allen; Ueno, Masato; Watanabe, Hiroaki
2011-12-01
The Abbreviated Injury Scale 2008 (AIS 2008) is the most recent injury coding system. A mapping table from the previous AIS 98 to AIS 2008 is available; however, this table contains AIS 98 codes that cannot be mapped to AIS 2008 codes, and some AIS 98 codes map to multiple candidate AIS 2008 codes with different severities. We aimed to modify the original table to adjust the severities and to validate these changes. We modified the original table by adding links from unmappable AIS 98 codes to AIS 2008 codes, and for codes with multiple candidates of different severities we assigned the weighted average of those severities as an adjusted severity. We applied the original table and our modified table to AIS 98 codes for major trauma patients and compared the proportion of cases whose Injury Severity Scores (ISSs) could be computed. We also compared the agreement of the ISS and New ISS (NISS) between manually determined AIS 2008 codes (MAN) and codes mapped using our table (MAP), with unadjusted or adjusted severities. Our modified table computed ISSs for all cases, versus 72.3% of cases with the original table. Agreement between MAN and MAP for the ISS and NISS was substantial (intraclass correlation coefficient = 0.939 for ISS and 0.943 for NISS). Using adjusted severities, the agreement for the ISS and NISS improved to 0.953 (p = 0.11) and 0.963 (p = 0.007), respectively. Our modified mapping table thus allows more ISSs to be computed than the original table, severity scores exhibited substantial agreement between MAN and MAP, and the use of adjusted severities improved these agreements further.
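The ISS and NISS compared above are computed from AIS severity scores in a standard way: ISS sums the squares of the worst severities in the three most severely injured body regions, while NISS uses the three highest severities regardless of region. A minimal sketch (the example injuries are hypothetical, and the paper's weighted-average severity adjustment is not shown):

```python
def iss(injuries):
    """ISS from (body_region, ais_severity) pairs: take the worst severity in
    each ISS body region, then sum the squares of the three highest; any AIS 6
    injury sets the score to the maximum of 75 by convention."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    worst = {}
    for region, sev in injuries:
        worst[region] = max(worst.get(region, 0), sev)
    return sum(s * s for s in sorted(worst.values(), reverse=True)[:3])

def niss(injuries):
    """New ISS: squares of the three highest AIS severities, regions ignored."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    return sum(s * s for s in sorted((s for _, s in injuries), reverse=True)[:3])

case = [("head", 4), ("head", 3), ("chest", 3), ("extremity", 2)]
print(iss(case), niss(case))  # → 29 34
```

The example shows why NISS can exceed ISS: the second head injury counts toward NISS but is masked by the worse head injury under ISS's one-per-region rule.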
Niu, Bolin; Forde, Kimberly A; Goldberg, David S.
2014-01-01
Background & Aims Despite the use of administrative data to perform epidemiological and cost-effectiveness research on patients with hepatitis B or C virus (HBV, HCV), there are no data outside of the Veterans Health Administration validating whether International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes can accurately identify cirrhotic patients with HBV or HCV. The validation of such algorithms is necessary for future epidemiological studies. Methods We evaluated the positive predictive value (PPV) of ICD-9-CM codes for identifying chronic HBV or HCV among cirrhotic patients within the University of Pennsylvania Health System, a large network that includes a tertiary care referral center, a community-based hospital, and multiple outpatient practices across southeastern Pennsylvania and southern New Jersey. We reviewed a random sample of 200 cirrhotic patients with ICD-9-CM codes for HCV and 150 cirrhotic patients with ICD-9-CM codes for HBV. Results The PPV of 1 inpatient or 2 outpatient HCV codes was 88.0% (168/191, 95% CI: 82.5–92.2%), while the PPV of 1 inpatient or 2 outpatient HBV codes was 81.3% (113/139, 95% CI: 73.8–87.4%). Several variations of the primary coding algorithm were evaluated to determine if different combinations of inpatient and/or outpatient ICD-9-CM codes could increase the PPV of the coding algorithm. Conclusions ICD-9-CM codes can identify chronic HBV or HCV in cirrhotic patients with a high PPV, and can be used in future epidemiologic studies to examine disease burden and the proper allocation of resources. PMID:25335773
NASA Technical Reports Server (NTRS)
Bache, George
1993-01-01
Validation of CFD codes is a critical first step in the process of developing CFD design capability. The MSFC Pump Technology Team has recognized the importance of validation and has funded several experimental programs designed to obtain CFD-quality validation data. The first data set to become available is for the SSME High Pressure Fuel Turbopump Impeller. LDV data were taken at the impeller inlet (to obtain a reliable inlet boundary condition) and at three radial positions at the impeller discharge. Our CFD code, TASCflow, is used within the propulsion and commercial pump industry as a tool for pump design. The objective of this work, therefore, is to further validate TASCflow for application in pump design. TASCflow was used to predict flow at the impeller discharge for flowrates of 80, 100 and 115 percent of design flow. Comparison to data has been made with encouraging results.
Iwatani, K; Hoshi, M; Shizuma, K; Hiraoka, M; Hayakawa, N; Oka, T; Hasai, H
1994-10-01
A benchmark test of the Monte Carlo neutron and photon transport code system (MCNP) was performed using a bare- and energy-moderated 252Cf fission neutron source which was obtained by transmission through 10-cm-thick iron. An iron plate was used to simulate the effect of the Hiroshima atomic bomb casing. This test includes the activation of indium and nickel for fast neutrons and gold, europium, and cobalt for thermal and epithermal neutrons, which were inserted in the moderators. The latter two activations are also to validate 152Eu and 60Co activity data obtained from the atomic bomb-exposed specimens collected at Hiroshima and Nagasaki, Japan. The neutron moderators used were Lucite and Nylon 6 and the total thickness of each moderator was 60 cm or 65 cm. Measured activity data (reaction yield) of the neutron-irradiated detectors in these moderators decreased to about 1/1,000th or 1/10,000th, which corresponds to about 1,500 m ground distance from the hypocenter in Hiroshima. For all of the indium, nickel, and gold activity data, the measured and calculated values agreed within 25%, and the corresponding values for europium and cobalt were within 40%. From this study, the MCNP code was found to be accurate enough for the bare- and energy-moderated 252Cf neutron activation calculations of these elements using moderators containing hydrogen, carbon, nitrogen, and oxygen.
Deep Space Test Bed for Radiation Studies
NASA Technical Reports Server (NTRS)
Adams, James H.; Christl, Mark; Watts, John; Kuznetsov, Eugene; Lin, Zi-Wei
2006-01-01
A key factor affecting the technical feasibility and cost of missions to Mars or the Moon is the need to protect the crew from ionizing radiation in space. Some analyses indicate that large amounts of spacecraft shielding may be necessary for crew safety. The shielding requirements are driven by the need to protect the crew from Galactic cosmic rays (GCR). Recent research activities aimed at enabling manned exploration have included shielding materials studies. A major goal of this research is to develop accurate radiation transport codes to calculate the shielding effectiveness of materials and to develop effective shielding strategies for spacecraft design. Validation of these models and calculations must be addressed in a relevant radiation environment to assure their technical readiness and accuracy. Test data obtained in the deep space radiation environment can provide definitive benchmarks and yield uncertainty estimates of the radiation transport codes. The two approaches presently used for code validation are ground based testing at particle accelerators and flight tests in high-inclination low-earth orbits provided by the shuttle, free-flyer platforms, or polar-orbiting satellites. These approaches have limitations in addressing all the radiation-shielding issues of deep space missions in both technical and practical areas. An approach based on long duration high altitude polar balloon flights provides exposure to the galactic cosmic ray composition and spectra encountered in deep space at a lower cost and with easier and more frequent access than afforded with spaceflight opportunities. This approach also results in shorter development times than spaceflight experiments, which is important for addressing changing program goals and requirements.
NASA Astrophysics Data System (ADS)
Reiman, A.; Ferraro, N. M.; Turnbull, A.; Park, J. K.; Cerfon, A.; Evans, T. E.; Lanctot, M. J.; Lazarus, E. A.; Liu, Y.; McFadden, G.; Monticello, D.; Suzuki, Y.
2015-06-01
In comparing equilibrium solutions for a DIII-D shot that is amenable to analysis by both stellarator and tokamak three-dimensional (3D) equilibrium codes, a significant disagreement has been seen between solutions of the VMEC stellarator equilibrium code and solutions of tokamak perturbative 3D equilibrium codes. The source of that disagreement has been investigated, and that investigation has led to new insights into the domain of validity of the different equilibrium calculations, and to a finding that the manner in which localized screening currents at low-order rational surfaces are handled can affect global properties of the equilibrium solution. The perturbative treatment has been found to break down at surprisingly small perturbation amplitudes due to overlap of the calculated perturbed flux surfaces, and that treatment is not valid in the pedestal region of the DIII-D shot studied. The perturbative treatment is valid, however, further into the interior of the plasma, and flux surface overlap does not account for the disagreement investigated here. Calculated equilibrium solutions for simple model cases and comparison of the 3D equilibrium solutions with those of other codes indicate that the disagreement arises from a difference in the handling of localized currents at low-order rational surfaces, with such currents being absent in VMEC and present in the perturbative codes. The significant differences in the global equilibrium solutions associated with the presence or absence of very localized screening currents at rational surfaces suggest that it may be possible to extract information about localized currents from appropriate measurements of global equilibrium plasma properties. That would require improved diagnostic capability on the high-field side of the tokamak plasma, a region difficult to access with diagnostics.
Denburg, Michelle R.; Haynes, Kevin; Shults, Justine; Lewis, James D.; Leonard, Mary B.
2011-01-01
Purpose: Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the U.K. represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. Methods: A cross-sectional validation study was performed to identify codes that best define CKD stages 3-5. All subjects with at least one non-zero measure of serum creatinine after 1-Jan-2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects <18 years and the Modification of Diet in Renal Disease formula for subjects ≥18 years of age. CKD was defined as an eGFR <60 ml/min/1.73m2 on at least two occasions, more than 90 days apart. Results: The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95% CI: 4.54, 4.58). A list of 45 Read codes was compiled which yielded a positive predictive value of 88.9% (95% CI: 88.7, 89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD stage 3. Conclusions: The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. PMID:22020900
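The validity statistics quoted above all derive from a 2×2 table of Read-code status against the laboratory (eGFR) reference definition. A minimal Python sketch with hypothetical cell counts, not the study's:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard validation metrics for a diagnosis code against a reference standard."""
    return {
        "ppv":         tp / (tp + fp),   # P(disease | code present)
        "sensitivity": tp / (tp + fn),   # P(code present | disease)
        "npv":         tn / (tn + fn),   # P(no disease | code absent)
        "specificity": tn / (tn + fp),   # P(code absent | no disease)
    }

# Hypothetical counts, for illustration only (not the study's cell counts)
metrics = diagnostic_metrics(tp=890, fp=110, fn=930, tn=8070)
```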
Denburg, Michelle R; Haynes, Kevin; Shults, Justine; Lewis, James D; Leonard, Mary B
2011-11-01
Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the UK represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. A cross-sectional validation study was performed to identify codes that best define CKD Stages 3-5. All subjects with at least one non-zero measure of serum creatinine after 1 January 2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects aged < 18 years and the Modification of Diet in Renal Disease formula for subjects aged ≥ 18 years. CKD was defined as an eGFR <60 mL/minute/1.73 m² on at least two occasions, more than 90 days apart. The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95%CI, 4.54-4.58). A list of 45 Read codes was compiled, which yielded a positive predictive value of 88.9% (95%CI, 88.7-89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD Stage 3. The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. Copyright © 2011 John Wiley & Sons, Ltd.
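The two eGFR formulas named above are standard; this sketch uses commonly cited coefficient values (the 4-variable MDRD study equation and the updated bedside Schwartz constant, both assumptions here rather than values taken from the paper), plus the study's two-measurement CKD rule:

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD study equation (commonly cited coefficients, assumed here)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_schwartz(height_cm, scr_mg_dl, k=0.413):
    """Bedside Schwartz formula for children (k = 0.413 is the updated constant)."""
    return k * height_cm / scr_mg_dl

def meets_ckd_definition(egfr_by_day):
    """Study rule: eGFR < 60 mL/min/1.73 m^2 on two occasions > 90 days apart.
    Input: list of (day, egfr) pairs."""
    low_days = sorted(day for day, egfr in egfr_by_day if egfr < 60)
    return any(later - earlier > 90 for earlier in low_days for later in low_days)
```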
NASA Technical Reports Server (NTRS)
Chen, C. P.
1990-01-01
An existing computational fluid dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculation. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data for different grid sizes.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one-dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics; a global porous-particle combustion model; mass, momentum, and energy interactions between phases; and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code, a series of computations was performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction, and particle material.
CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment
NASA Technical Reports Server (NTRS)
Gaffney, Richard L., Jr.; Cutler, Andrew D.
2005-01-01
If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment, and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of the particular physical phenomenon associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.
Security authentication using phase-encoded nanoparticle structures and polarized light.
Carnicer, Artur; Hassanfiroozi, Amir; Latorre-Carmona, Pedro; Huang, Yi-Pai; Javidi, Bahram
2015-01-15
Phase-encoded nanostructures such as quick response (QR) codes made of metallic nanoparticles are proposed for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated using polarized light, and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code from examination of the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and support vector machine algorithms to authenticate the phase-encoded QR codes using their polarimetric signatures.
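As an illustration of the statistical side of the authentication step, the two-sample Kolmogorov-Smirnov statistic can be computed directly; the speckle "signatures" below are synthetic Gaussian samples, not real polarimetric data:

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
    between the two empirical cumulative distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(sorted_xs, x):
        # Fraction of values <= x, via binary search
        return bisect.bisect_right(sorted_xs, x) / len(sorted_xs)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Synthetic 'signatures': a genuine probe matches the reference distribution,
# a counterfeit probe is shifted
rng = random.Random(0)
reference = [rng.gauss(0.0, 1.0) for _ in range(500)]
probe_true = [rng.gauss(0.0, 1.0) for _ in range(500)]
probe_fake = [rng.gauss(0.8, 1.0) for _ in range(500)]
assert ks_statistic(reference, probe_true) < ks_statistic(reference, probe_fake)
```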
Particle kinetic simulation of high altitude hypervelocity flight
NASA Technical Reports Server (NTRS)
Boyd, Iain; Haas, Brian L.
1994-01-01
Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.
Magistri, Marco; Velmeshev, Dmitry; Makhmutova, Madina; Faghihi, Mohammad Ali
2015-01-01
The underlying genetic variations of late-onset Alzheimer's disease (LOAD) cases remain largely unknown. A combination of genetic variations with variable penetrance and lifetime epigenetic factors may converge on transcriptomic alterations that drive the LOAD pathological process. Transcriptome profiling using deep sequencing technology offers insight into common altered pathways regardless of the underpinning genetic or epigenetic factors and thus represents an ideal tool to investigate molecular mechanisms related to the pathophysiology of LOAD. We performed directional RNA sequencing on high-quality RNA samples extracted from hippocampi of LOAD and age-matched controls. We further validated our data using qRT-PCR on a larger set of postmortem brain tissues, confirming downregulation of the gene encoding substance P (TAC1) and upregulation of the gene encoding plasminogen activator inhibitor-1 (SERPINE1). Pathway analysis indicates dysregulation in neural communication, cerebral vasculature, and amyloid-β clearance. Besides protein-coding genes, we identified several annotated and non-annotated long noncoding RNAs that are differentially expressed in LOAD brain tissues; three of them are activity-dependently regulated and one is induced by Aβ1-42 exposure of human neural cells. Our data provide a comprehensive list of transcriptomic alterations in LOAD hippocampi and warrant a holistic approach, including both coding and non-coding RNAs, in functional studies aimed at understanding the pathophysiology of LOAD. PMID:26402107
Open ISEmeter: An open hardware high-impedance interface for potentiometric detection.
Salvador, C; Mesa, M S; Durán, E; Alvarez, J L; Carbajo, J; Mozo, J D
2016-05-01
In this work, a new open hardware interface based on Arduino to read electromotive force (emf) from potentiometric detectors is presented. The interface has been fully designed with the open-code philosophy, and all documentation will be accessible on the web. The paper describes a comprehensive project including the electronic design, the firmware loaded on the Arduino, and the Java-coded graphical user interface used to load data onto a computer (PC or Mac) for processing. The prototype was tested by measuring the calibration curve of a detector. As the detection element, an active poly(vinyl chloride)-based membrane was used, doped with cetyltrimethylammonium dodecylsulphate (CTA(+)-DS(-)). The experimental measures of emf indicate Nernstian behaviour with the CTA(+) content of the test solutions, as described in the literature, proving the validity of the developed prototype. A comparative analysis of performance was made by using the same chemical detector but changing the measurement instrumentation.
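The Nernstian check mentioned above compares the measured calibration slope with the ideal value 2.303·RT/zF. A small sketch using standard physical constants (this is generic electrochemistry, not code from the project):

```python
import math

def nernst_slope_mv_per_decade(temperature_c=25.0, charge=1):
    """Ideal Nernstian slope in mV per decade of activity: 2.303 * R * T / (z * F)."""
    R = 8.314462618       # gas constant, J/(mol K)
    F = 96485.33212       # Faraday constant, C/mol
    T = temperature_c + 273.15
    return math.log(10) * R * T / (charge * F) * 1e3

# A detector is judged Nernstian if its calibration slope is near the ideal value
slope = nernst_slope_mv_per_decade(25.0, 1)   # about 59.2 mV/decade at 25 C
```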
FY96-98 Summary Report Mercury: Next Generation Laser for High Energy Density Physics SI-014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayramian, A.; Beach, R.; Bibeau, C.
The scope of the Mercury Laser project encompasses the research, development, and engineering required to build a new generation of diode-pumped solid-state lasers for Inertial Confinement Fusion (ICF). The Mercury Laser will be the first integrated demonstration of laser diodes, crystals, and gas cooling within a scalable laser architecture. This report is intended to summarize the progress accomplished during the first three years of the project. Due to the technological challenges associated with production of 900 nm diode-bars, heatsinks, and high optical-quality Yb:S-FAP crystals, the initial focus of the project was primarily centered on the R&D in these three areas. During the third year of the project, the R&D continued in parallel with the development of computer codes, partial activation of the laser, component testing, and code validation where appropriate.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Computer program BL2D for solving two-dimensional and axisymmetric boundary layers
NASA Technical Reports Server (NTRS)
Iyer, Venkit
1995-01-01
This report presents the formulation, validation, and user's manual for the computer program BL2D. The program is a fourth-order-accurate solution scheme for solving two-dimensional or axisymmetric boundary layers in speed regimes that range from low subsonic to hypersonic Mach numbers. A basic implementation of transition zone and turbulence modeling is also included. The code is the result of many improvements made to the program VGBLP, which is described in NASA TM-83207 (February 1982), and can effectively supersede it. The BL2D code is designed to be modular, user-friendly, and portable to any machine with a standard FORTRAN 77 compiler. The report contains the new formulation adopted and the details of its implementation. Five validation cases are presented. A detailed user's manual with the input format description and instructions for running the code is included. Adequate information is presented in the report to enable the user to modify or customize the code for specific applications.
Calculation and Correlation of the Unsteady Flowfield in a High Pressure Turbine
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Liu, Jong S.; Panovsky, Josef; Keith, Theo G., Jr.; Mehmed, Oral
2002-01-01
Forced vibrations in turbomachinery components can cause blades to crack or fail due to high-cycle fatigue. Such forced response problems will become more pronounced in newer engines with higher pressure ratios and smaller axial gaps between blade rows. An accurate numerical prediction of the unsteady aerodynamic phenomena that cause resonant forced vibrations is increasingly important to designers. Validation of the computational fluid dynamics (CFD) codes used to model the unsteady aerodynamic excitations is necessary before these codes can be used with confidence. Recently published benchmark data, including unsteady pressures and vibratory strains, for a high-pressure turbine stage makes such code validation possible. In the present work, a three-dimensional, unsteady, multi-blade-row, Reynolds-averaged Navier-Stokes code is applied to a turbine stage that was recently tested in a short duration test facility. Two configurations with three operating conditions corresponding to modes 2, 3, and 4 crossings on the Campbell diagram are analyzed. Unsteady pressures on the rotor surface are compared with data.
A CFD validation roadmap for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments would provide the needed validation data.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the verification and validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise that assesses whether the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which is in turn composed of code verification, which assesses that the physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
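Solution verification via Richardson extrapolation, as described above, can be sketched generically: given solutions on three grids refined by a constant ratio, estimate the observed order of accuracy and extrapolate toward the exact value. This toy example is not GBS-specific:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three grids refined by ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Extrapolated estimate of the grid-converged solution from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

# Manufactured check: a method with error C * h^p, p = 2, must be recovered
exact, C = 1.0, 0.3
solution = lambda h: exact + C * h ** 2
p = observed_order(solution(0.4), solution(0.2), solution(0.1), r=2)
estimate = richardson_extrapolate(solution(0.2), solution(0.1), r=2, p=p)
# p is close to 2 and estimate is close to the exact value 1.0
```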
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled, and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle, and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer, and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain-rate tests.
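The shock-tube validation idea can be illustrated on a much smaller scale than the gun code: a first-order Godunov scheme with an exact Riemann solver for the inviscid Burgers equation, checked for conservation and shock propagation. This is a sketch of the method class, not the paper's code:

```python
def burgers_flux(u):
    return 0.5 * u * u

def godunov_flux(ul, ur):
    """Exact Riemann solver for the inviscid Burgers equation."""
    if ul > ur:                              # shock with speed s = (ul + ur) / 2
        return burgers_flux(ul) if ul + ur > 0 else burgers_flux(ur)
    if ul >= 0:                              # rarefaction moving right
        return burgers_flux(ul)
    if ur <= 0:                              # rarefaction moving left
        return burgers_flux(ur)
    return 0.0                               # transonic rarefaction straddling u = 0

def step(u, dt, dx):
    """One first-order Godunov update with transmissive boundaries."""
    ue = [u[0]] + u + [u[-1]]                # ghost cells
    f = [godunov_flux(ue[i], ue[i + 1]) for i in range(len(ue) - 1)]
    return [u[i] - dt / dx * (f[i + 1] - f[i]) for i in range(len(u))]

# Riemann problem: u = 1 on the left, 0 on the right -> shock moving at speed 1/2
n = 100
dx = 1.0 / n
u = [1.0 if (i + 0.5) * dx < 0.5 else 0.0 for i in range(n)]
for _ in range(50):                          # CFL number 0.5
    u = step(u, dt=0.5 * dx, dx=dx)
```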
A generic minimization random allocation and blinding system on web.
Cai, Hongwei; Xia, Jielai; Xu, Dezhong; Gao, Donghuai; Yan, Yongping
2006-12-01
Minimization is a dynamic randomization method for clinical trials. Although recommended by many researchers, the use of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding the validity of conventional analyses and its complexity in implementation. However, both the statistical and clinical validity of minimization were demonstrated in recent studies. A minimization random allocation system integrated with a blinding function that could facilitate the implementation of this method in general clinical trials has not previously been reported. SYSTEM OVERVIEW: The system is a web-based random allocation system using the Pocock and Simon minimization method. It also supports multiple treatment arms within a trial, multiple simultaneous trials, and blinding without further programming. The system was constructed with a generic database schema design method, the Pocock and Simon minimization method, and a blinding method. It was coded in the Microsoft Visual Basic and Active Server Pages (ASP) programming languages, and all data sets were managed with a Microsoft SQL Server database. Some critical programming code is also provided. SIMULATIONS AND RESULTS: Two clinical trials were simulated simultaneously to test the system's applicability. Not only balanced groups but also blinded allocation results were achieved in both trials. Practical considerations for the minimization method, and the benefits, general applicability, and drawbacks of the technique implemented in this system, are discussed. Promising features of the proposed system are also summarized.
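A sketch of the Pocock and Simon minimization rule the system implements: for each new subject, compute the marginal imbalance that each candidate arm would produce, summed over the subject's stratification factor levels, and assign the arm minimizing it (ties broken at random). The factor names and the range imbalance measure are illustrative choices, not taken from the paper:

```python
import random
from collections import defaultdict

def minimization_assign(counts, arms, factors, rng):
    """Pocock-Simon minimization: pick the arm giving the least total marginal
    imbalance (range over arms, summed over the subject's factor levels)."""
    def imbalance_if(arm):
        total = 0
        for factor, level in factors:
            hypothetical = [counts[(factor, level, a)] + (a == arm) for a in arms]
            total += max(hypothetical) - min(hypothetical)
        return total
    scores = {arm: imbalance_if(arm) for arm in arms}
    best = min(scores.values())
    return rng.choice([a for a in arms if scores[a] == best])

# Hypothetical two-arm trial stratified by sex and age group
counts = defaultdict(int)
rng = random.Random(42)
arms = ["A", "B"]
subjects = [("M", "<40"), ("F", "<40"), ("M", ">=40"), ("M", "<40")] * 10
for sex, age in subjects:
    arm = minimization_assign(counts, arms, [("sex", sex), ("age", age)], rng)
    counts[("sex", sex, arm)] += 1
    counts[("age", age, arm)] += 1
```

After 40 subjects, each factor-level margin stays closely balanced between the arms, which is the point of the method.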
Validation of a program for supercritical power plant calculations
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian
2011-12-01
This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated using the in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for the high- and low-pressure regenerative heat exchangers, and pressure losses in the heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle, and heat taken from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values. The method used for the relative error calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
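The paper presents its own relative-error formula; one common convention, used here purely for illustration with invented numbers, is the absolute difference normalized by the reference value:

```python
def relative_error_percent(value, reference):
    """Relative error of a code result against a reference result, in percent.
    (One common convention; the paper's exact formula may differ.)"""
    return abs(value - reference) / abs(reference) * 100.0

# Invented net-efficiency values from the two models, for illustration only
error = relative_error_percent(0.4512, 0.4509)   # well under the 0.1% level reported
```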
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increased number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira (Greece), and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site, numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory and field measurements, thus establishing a gold standard for numerical codes for inundation mapping.
While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes reduce the level of uncertainty in their results to the uncertainty in the geophysical initial conditions. Further, when coupled with real-time free-field tsunami measurements from tsunameters, validated codes are the only choice for realistic forecasting of inundation; the consequences of failure are too ghastly to take chances with numerical procedures that have not been validated. We discuss a ten-step process of benchmark tests for models used for inundation mapping. The associated methodology and algorithms must first be validated with analytical solutions, then verified with laboratory measurements and field data. The models need to be published in the scientific literature in peer-reviewed journals indexed by ISI. While this process may appear onerous, it reflects our state of knowledge and is the only defensible methodology when human lives are at stake. Synolakis, C.E., and Bernard, E.N., Tsunami science before and beyond Boxing Day 2004, Phil. Trans. R. Soc. A 364(1845), 2231-2263, 2005.
Evidence-Based Reading and Writing Assessment for Dyslexia in Adolescents and Young Adults
Nielsen, Kathleen; Abbott, Robert; Griffin, Whitney; Lott, Joe; Raskind, Wendy; Berninger, Virginia W.
2016-01-01
The same working memory and reading and writing achievement phenotypes (behavioral markers of genetic variants) validated in prior research with younger children and older adults in a multi-generational family genetics study of dyslexia were used to study 81 adolescents and young adults (ages 16 to 25) from that study. Dyslexia is characterized by word reading and spelling skills below the population mean despite the ability to use oral language to express thinking. These working memory predictor measures were given and used to predict reading and writing achievement: Coding (storing and processing) of heard and spoken words (phonological coding), read and written words (orthographic coding), base words and affixes (morphological coding), and words accumulating over time (syntax coding); Cross-Code Integration (a phonological loop for linking phonological name and orthographic letter codes, and an orthographic loop for linking orthographic letter codes and finger sequencing codes); and Supervisory Attention (focused and switching attention and self-monitoring during written word finding). Multiple regressions showed that most predictors explained individual differences in at least one reading or writing outcome, but which predictors explained unique variance beyond shared variance depended on the outcome. ANOVAs confirmed that research-supported criteria for dyslexia validated for younger children and their parents could be used to diagnose which adolescents and young adults did (n=31) or did not (n=50) meet research criteria for dyslexia. Findings are discussed in reference to the heterogeneity of phenotypes (behavioral markers of genetic variables) and their application to assessment for accommodations and ongoing instruction for adolescents and young adults with dyslexia. PMID:26855554
NASA Astrophysics Data System (ADS)
Patel, Anita; Pulugundla, Gautam; Smolentsev, Sergey; Abdou, Mohamed; Bhattacharyay, Rajendraprasad
2018-04-01
Following the magnetohydrodynamic (MHD) code validation and verification proposal by Smolentsev et al. (Fusion Eng Des 100:65-72, 2015), we perform code-to-code and code-to-experiment comparisons between two computational solvers, FLUIDYN and HIMAG, which are presently considered two of the prospective CFD tools for fusion blanket applications. In such applications, an electrically conducting breeder/coolant circulates in the blanket ducts in the presence of a strong plasma-confining magnetic field at high Hartmann numbers, Ha (Ha² is the ratio between electromagnetic and viscous forces), and high interaction parameters, N (N is the ratio of electromagnetic to inertial forces). The main objective of this paper is to provide the scientific and engineering community with common references to assist fusion researchers in the selection of adequate computational means to be used for blanket design and analysis. As an initial validation case, the two codes are applied to the classic problem of laminar fully developed MHD flow in a rectangular duct. Both codes demonstrate very good agreement with the analytical solution for Ha up to 15,000. To address the capabilities of the two codes to properly resolve complex geometry flows, we consider a case of three-dimensional developing MHD flow in a geometry comprising a series of interconnected electrically conducting rectangular ducts. The computed electric potential distributions for two flows, (Case A) Ha=515, N=3.2 and (Case B) Ha=2059, N=63.8, are in very good agreement with the experimental data, while the comparisons for the MHD pressure drop are still unsatisfactory. To better interpret the observed differences, the obtained numerical data are analyzed against earlier theoretical and experimental studies of flows that involve changes in the relative orientation between the flow and the magnetic field.
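For readers unfamiliar with the dimensionless groups in the abstract above, a minimal sketch of how Ha and N are conventionally computed; the property values below are illustrative placeholders for a generic liquid metal, not figures from the paper:

```python
import math

def hartmann_number(B, L, sigma, rho, nu):
    """Ha = B * L * sqrt(sigma / (rho * nu)).
    Ha**2 gives the ratio of electromagnetic (Lorentz) to viscous forces."""
    return B * L * math.sqrt(sigma / (rho * nu))

def interaction_parameter(Ha, Re):
    """N = Ha**2 / Re: ratio of electromagnetic to inertial forces."""
    return Ha**2 / Re

# Illustrative (hypothetical) values for a liquid-metal duct flow:
B = 1.0        # magnetic field strength, T
L = 0.05       # duct half-width (characteristic length), m
sigma = 7.6e5  # electrical conductivity, S/m
rho = 9.3e3    # density, kg/m^3
nu = 1.5e-7    # kinematic viscosity, m^2/s
Re = 1.0e4     # Reynolds number

Ha = hartmann_number(B, L, sigma, rho, nu)
N = interaction_parameter(Ha, Re)
print(Ha, N)
```

With these placeholder properties, Ha comes out on the order of 10³, i.e. well inside the strongly magnetized regime the abstract describes.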
Jayasinghe, Sanjay; Macartney, Kristine
2013-01-30
Hospital discharge records and laboratory data have shown a substantial early impact from the rotavirus vaccination program that commenced in 2007 in Australia. However, these assessments are affected by the validity and reliability of hospital discharge coding and stool testing to measure the true incidence of hospitalised disease. The aim of this study was to assess the validity of these data sources for disease estimation, both before and after vaccine introduction. All hospitalisations at a major paediatric centre in children aged <5 years from 2000 to 2009 containing acute gastroenteritis (AGE) ICD-10-AM diagnosis codes were linked to hospital laboratory stool testing data. The validity of the rotavirus-specific diagnosis code (A08.0) and the incidence of hospitalisations attributable to rotavirus, by both direct estimation and with adjustments for non-testing and miscoding, were calculated for pre- and post-vaccination periods. A laboratory record of stool testing was available for 36% of all AGE hospitalisations (n=4948). The rotavirus code had high specificity (98.4%; 95% CI, 97.5-99.1%) and positive predictive value (96.8%; 94.8-98.3%), and modest sensitivity (61.6%; 58-65.1%). Of all rotavirus test-positive hospitalisations, only a third had a rotavirus code. The estimated annual average number of rotavirus hospitalisations, following adjustment for non-testing and miscoding, was 5- and 6-fold higher than identified from testing and coding alone, respectively. Direct and adjusted estimates yielded similar percentage reductions in annual average rotavirus hospitalisations of over 65%. Due to the limited use of stool testing and the poor sensitivity of the rotavirus-specific diagnosis code, routine hospital discharge and laboratory data substantially underestimate the true incidence of rotavirus hospitalisations and absolute vaccine impact.
However, these data can still be used to monitor vaccine impact, as the effects of miscoding and under-testing appear to be comparable between pre- and post-vaccination periods. Copyright © 2012 Elsevier Ltd. All rights reserved.
Factor Structure and Incremental Validity of the Enhanced Computer- Administered Tests
1992-07-01
[Report-form excerpt] Subject terms: aptitude tests, ASVAB (Armed Services Vocational Aptitude Battery), CAT-ASVAB (a computerized adaptive testing version of the ASVAB), and the psychomotor portion of the General Aptitude Test Battery (GATB), used to predict performance in the mechanical maintenance specialties; the remainder of the excerpt is a distribution list (Dir, Personnel Systems (Code 12); Dir, Testing Systems (Code 13); CAT/ASVAB PMO; COMNAVCRUITCOM; CNET; CG MCRD).
ERIC Educational Resources Information Center
Smith, Derrick; Rosenblum, L. Penny
2013-01-01
Introduction: The purpose of the study presented here was the initial validation of a comprehensive set of competencies focused solely on the Nemeth code. Methods: Using the Delphi method, 20 expert panelists were recruited to participate in the study on the basis of their past experience in teaching a university-level course in the Nemeth code.…
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components
NASA Technical Reports Server (NTRS)
1991-01-01
Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.
Solution of nonlinear flow equations for complex aerodynamic shapes
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed
1992-01-01
Solution-adaptive CFD codes based on unstructured methods for 3-D complex geometries in subsonic to supersonic regimes were investigated, and the computed solution data were analyzed in conjunction with experimental data obtained from wind tunnel measurements in order to assess and validate the predictability of the code. Specifically, the FELISA code was assessed and improved in cooperation with NASA Langley and Imperial College, Swansea, U.K.
From Theory to Practice: Measuring end-of-life communication quality using multiple goals theory.
Van Scoy, L J; Scott, A M; Reading, J M; Chuang, C H; Chinchilli, V M; Levi, B H; Green, M J
2017-05-01
To describe how multiple goals theory can be used as a reliable and valid measure (i.e., coding scheme) of the quality of conversations about end-of-life issues. We analyzed transcripts from 17 conversations in which 68 participants (mean age=51 years) played a game that prompted discussion in response to open-ended questions about end-of-life issues. Conversations (mean duration=91 min) were audio-recorded and transcribed. Communication quality was assessed by three coders who assigned numeric scores rating how well individuals accomplished task, relational, and identity goals in the conversation. The coding measure, which results in a quantifiable outcome, yielded strong reliability (intra-class correlation range=0.73-0.89 and Cronbach's alpha range=0.69-0.89 for each of the coded domains) and validity (using multilevel nonlinear modeling, we detected significant variability in scores between games for each of the coded domains, all p-values <0.02). Our coding scheme provides a theory-based measure of end-of-life conversation quality that is superior to other methods of measuring communication quality. Our description of the coding method enables researchers to adapt and apply this measure to communication interventions in other clinical contexts. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
The Initial Atmospheric Transport (IAT) Code: Description and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, Charles W.; Bartel, Timothy James
The Initial Atmospheric Transport (IAT) computer code was developed at Sandia National Laboratories as part of its nuclear launch accident consequences analysis suite of computer codes. The purpose of IAT is to predict the initial puff/plume rise resulting from either a solid rocket propellant or liquid rocket fuel fire. The code generates initial conditions for subsequent atmospheric transport calculations. The IAT code has been compared to two data sets appropriate to the design space of space launch accident analyses. The primary model uncertainties are the entrainment coefficients for the extended Taylor model. The Titan 34D accident (1986) was used to calibrate these entrainment settings for a prototypic liquid propellant accident, while the recent Johns Hopkins University Applied Physics Laboratory (JHU/APL, or simply APL) large propellant block tests (2012) were used to calibrate the entrainment settings for prototypic solid propellant accidents. North American Meteorology (NAM)-formatted weather data profiles are used by IAT to determine the local buoyancy force balance. The IAT comparisons for the APL solid propellant tests illustrate the sensitivity of the plume elevation to the weather profiles; that is, the weather profile is a dominant factor in determining the plume elevation. The IAT code performed remarkably well and is considered validated for neutral weather conditions.
Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, Charles J.; Shi, Xizeng
The specific goals of this project were to further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017) and to validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for the DADIMAG Version 2 executable code.
Jolley, Rachel J; Quan, Hude; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J
2015-12-23
Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. All adults (aged ≥ 18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. Published by the BMJ Publishing Group Limited. 
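The validation statistics reported in the sepsis study above (Sn, Sp, PPV, NPV) all derive from the same 2x2 table of coded definition versus chart-review reference standard. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard 2x2 validation statistics for a coded case definition
    against a reference standard (e.g., medical chart review).
    tp/fp/fn/tn: true/false positive and negative counts."""
    return {
        "sensitivity": tp / (tp + fn),  # coded positive among true cases
        "specificity": tn / (tn + fp),  # coded negative among non-cases
        "ppv": tp / (tp + fp),          # true cases among coded positives
        "npv": tn / (tn + fn),          # non-cases among coded negatives
    }

# Hypothetical counts, for illustration only:
stats = diagnostic_stats(tp=80, fp=10, fn=20, tn=90)
print(stats)
```

Note the asymmetry the study exploits: an undercoded condition (many false negatives) can still have high specificity and PPV while sensitivity stays low.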
Validity of Diagnostic Codes for Acute Stroke in Administrative Databases: A Systematic Review
McCormick, Natalie; Bhole, Vidula; Lacaille, Diane; Avina-Zubieta, J. Antonio
2015-01-01
Objective To conduct a systematic review of studies reporting on the validity of International Classification of Diseases (ICD) codes for identifying stroke in administrative data. Methods MEDLINE and EMBASE were searched (inception to February 2015) for studies: (a) Using administrative data to identify stroke; or (b) Evaluating the validity of stroke codes in administrative data; and (c) Reporting validation statistics (sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), or Kappa scores) for stroke, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2015) of original papers. Studies solely evaluating codes for transient ischaemic attack were excluded. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Results Seventy-seven studies published from 1976–2015 were included. The sensitivity of ICD-9 430-438/ICD-10 I60-I69 for any cerebrovascular disease was ≥ 82% in most (≥ 50%) studies, and specificity and NPV were both ≥ 95%. The PPV of these codes for any cerebrovascular disease was ≥ 81% in most studies, while the PPV specifically for acute stroke was ≤ 68%. In at least 50% of studies, PPVs were ≥ 93% for subarachnoid haemorrhage (ICD-9 430/ICD-10 I60), 89% for intracerebral haemorrhage (ICD-9 431/ICD-10 I61), and 82% for ischaemic stroke (ICD-9 434/ICD-10 I63 or ICD-9 434&436). For in-hospital deaths, sensitivity was 55%. For cerebrovascular disease or acute stroke as a cause-of-death on death certificates, sensitivity was ≤ 71% in most studies while PPV was ≥ 87%. Conclusions While most cases of prevalent cerebrovascular disease can be detected using 430-438/I60-I69 collectively, acute stroke must be defined using more specific codes. Most in-hospital deaths and death certificates with stroke as a cause-of-death correspond to true stroke deaths.
Linking vital statistics and hospitalization data may improve the ascertainment of fatal stroke. PMID:26292280
42 CFR 478.15 - QIO review of changes resulting from DRG validation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner dissatisfied with a change to the diagnostic or procedural coding information made by a QIO as a result of DRG...
The Multitheoretical List of Therapeutic Interventions - 30 items (MULTI-30).
Solomonov, Nili; McCarthy, Kevin S; Gorman, Bernard S; Barber, Jacques P
2018-01-16
To develop a brief version of the Multitheoretical List of Therapeutic Interventions (MULTI-60) in order to decrease completion time burden by approximately half, while maintaining content coverage. Study 1 aimed to select 30 items. Study 2 aimed to examine the reliability and internal consistency of the MULTI-30. Study 3 aimed to validate the MULTI-30 and ensure content coverage. In Study 1, the sample included 186 therapist and 255 patient MULTI ratings, and 164 ratings of sessions coded by trained observers. Internal consistency (Cronbach's alpha and McDonald's omega) was calculated and confirmatory factor analysis was conducted. Psychotherapy experts rated content relevance. Study 2 included a sample of 644 patient and 522 therapist ratings, and 793 codings of psychotherapy sessions. In Study 3, the sample included 33 codings of sessions. A series of regression analyses was conducted to examine replication of previously published findings using the MULTI-30. The MULTI-30 was found to be valid, reliable, and internally consistent across the 2564 ratings examined in the three studies presented. The MULTI-30 is a brief and reliable process measure. Future studies are required for further validation.
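Cronbach's alpha, the internal-consistency statistic used in the MULTI-30 studies above, can be computed directly from item-level scores. A minimal sketch (toy data, not the studies' ratings):

```python
def cronbach_alpha(items):
    """items: list of equal-length score lists, one per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total scale score for each of the n respondents.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly consistent items across three respondents:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # alpha = 1.0
```

When every item tracks the total score exactly, the item variances sum to half the total variance (for k=2), so alpha reaches its maximum of 1.0.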
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
14 CFR Appendix B to Part 63 - Flight Navigator Training Course Requirements
Code of Federal Regulations, 2010 CFR
2010-01-01
... include additional subjects in the ground training curriculum, such as international law, flight hygiene... weather reports. Forecasting. International Morse code: Ability to receive code groups of letters and... school subjects. (3) Each instructor who conducts flight training must hold a valid flight navigator...
Using Pulsed Power for Hydrodynamic Code Validation
2001-06-01
[Excerpt] James Degnan, George Kiuttu; Air Force Research Laboratory, Albuquerque, NM 87117. Abstract: As part of ongoing hydrodynamic code… a capacitor bank at the Air Force Research Laboratory (AFRL)… a cylindrical aluminum liner that is magnetically imploded onto a central target by self-induced…
Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres
2008-10-01
To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model, using nationally representative data on the French working population. National sample of 24,486 men and women who completed Karasek's Job Content Questionnaire (JCQ), measuring scores for psychological demands, decision latitude, and social support (individual scores), in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity, coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude and support for each job title: JEM scores (mean of the individual scores) and JEM binary exposures (JEM score dichotomized at the median). Analyses of the variance of the individual scores of demands, latitude, and support explained by occupations and economic activities, of the correlation and agreement between individual measures and JEM measures, and of the sensitivity and specificity of JEM exposures, together with the study of associations with self-reported health, showed low validity of the JEM measures for psychological demands and social support, and relatively higher validity for decision latitude, compared with individual measures. The JEM measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEMs for psychosocial work factors.
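The JEM construction described above (per-job mean of individual scores, then dichotomization at the median) can be sketched in a few lines; job titles and scores below are hypothetical, not from the French survey:

```python
from collections import defaultdict
from statistics import mean, median

def build_jem(records):
    """records: iterable of (job_title, individual_score) pairs.
    Returns (jem_scores, jem_binary): per-job mean score, and a binary
    exposure flag from dichotomizing the JEM scores at their median."""
    by_job = defaultdict(list)
    for job, score in records:
        by_job[job].append(score)
    jem_scores = {job: mean(scores) for job, scores in by_job.items()}
    cutoff = median(jem_scores.values())
    jem_binary = {job: s > cutoff for job, s in jem_scores.items()}
    return jem_scores, jem_binary

# Hypothetical questionnaire records (job title, demand score):
records = [("nurse", 3), ("nurse", 5), ("clerk", 1),
           ("clerk", 2), ("welder", 4), ("welder", 6)]
scores, exposed = build_jem(records)
print(scores, exposed)
```

The key validity question the study raises is visible even in this toy: everyone with the same job title gets the same exposure, so within-job variability in individual scores is lost by design.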
Nir, Rony-Reuven; Lev, Rina; Moont, Ruth; Granovsky, Yelena; Sprecher, Elliot; Yarnitsky, David
2008-11-01
Multiple studies have supported the usefulness of standardized low-resolution brain electromagnetic tomography (sLORETA) in localizing generators of scalp-recorded potentials. The current study applied sLORETA to pain event-related potentials, primarily aiming to validate this technique for pain research by identifying well-known pain-related regions, and subsequently to investigate the still-debated and ambiguous topic of pain intensity coding in these regions, focusing on their relative impact on subjective pain perception. sLORETA revealed significant activations of the bilateral primary somatosensory (SI) and anterior cingulate cortices and of the contralateral operculoinsular and dorsolateral prefrontal (DLPFC) cortices (P < .05 for each). Activity of these regions, excluding DLPFC, correlated with subjective numerical pain scores (P < .05 for each). However, a multivariate regression analysis (R = .80; P = .024) distinguished the contralateral SI as the only region whose activation magnitude significantly predicted the subjective perception of noxious stimuli (P = .020), further substantiated by a reduced regression model (R = .75, P = .008). Based on (1) correspondence of the pain-activated regions identified by sLORETA with the acknowledged imaging-based pain network and (2) the contralateral SI proving to be the most contributing region in pain intensity coding, we found sLORETA to be an appropriate tool for relevant pain research and further substantiated the role of SI in pain perception. Because the literature on pain intensity coding offers inconsistent findings, the current article used a novel tool for revisiting this controversial issue. Results suggest that it is the activation magnitude of SI that solely establishes the significant correlation with subjective pain ratings, in accordance with the classical clinical thinking relating SI lesions to diminished perception of pain.
Although this study cannot support a causal relation between SI activation magnitude and pain perception, such relation might be insinuated.
Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus
2005-01-01
The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.
Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu
2006-01-01
Objectives This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). Methods The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures – all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. Findings The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. Conclusion The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. 
The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires. PMID:16995935
Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data
NASA Technical Reports Server (NTRS)
Schwing, Alan
2012-01-01
Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind Tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. With close comparison, this work validates US3D for highly separated flows similar to those examined here.
Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter
2007-08-31
[Recovered figure-list fragments: low- and high-altitude fields produced by 10-kHz and 20-kHz sources, computed using the FD and TD codes, for 3 different grid spacings in latitude. In each comparison the agreement between the two codes is excellent, validating the new FD code.]
Field Validation of the Stability Limit of a Multi MW Turbine
NASA Astrophysics Data System (ADS)
Kallesøe, Bjarne S.; Kragh, Knud A.
2016-09-01
Long slender blades of modern multi-megawatt turbines exhibit a flutter-like instability at rotor speeds above a critical rotor speed. Knowing the critical rotor speed is crucial to a safe turbine design. The flutter-like instability can only be estimated using geometrically nonlinear aeroelastic codes. In this study, the estimated rotor-speed stability limit of a 7 MW state-of-the-art wind turbine is validated experimentally. The stability limit is estimated using Siemens Wind Power's in-house aeroelastic code, and the results show that the predicted stability limit is within 5% of the experimentally observed limit.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Topp, David A.; Heidegger, Nathan J.; Delaney, Robert A.
1994-01-01
The focus of this task was to validate the ADPAC code for heat transfer calculations. To accomplish this goal, the ADPAC code was modified to add a Cartesian coordinate system capability and boundary conditions handling spanwise periodicity and transpiration boundaries. The primary validation case was the film-cooled C3X vane. The cooling-hole modeling included both a porous region and a grid in each discrete hole. Predictions for these models, as well as for the smooth wall, compared well with the experimental data.
Calculated criticality for sup 235 U/graphite systems using the VIM Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, P.J.; Grasseschi, G.L.; Olsen, D.N.
1992-01-01
Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched {sup 235}U/graphite systems. Experimental data, used in development of more approximate methods, date back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios encompass the range from 2,000, representative of the new production MHTGR, to 10,000 in the fuel of TREAT. Calculations used the ENDF/B-V data.
NASA Astrophysics Data System (ADS)
Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor
2017-04-01
As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross-polar-cap electric potential drop, and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can solve both the isotropic and anisotropic MHD equations. In the anisotropic version, we use modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressures) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The validation results for the SPSU-16 code agree well with the previously published results of other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, but some differ.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging.
The data for extreme environments (e.g., elevated-temperature and high-ionic-strength media) that are needed for repository modeling are severely lacking. In addition, most existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Access to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast, robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulty meeting these requirements. Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments.
The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
DNA methylation of miRNA coding sequences putatively associated with childhood obesity.
Mansego, M L; Garcia-Lacarte, M; Milagro, F I; Marti, A; Martinez, J A
2017-02-01
Epigenetic mechanisms may be involved in obesity onset and its consequences. The aim of the present study was to evaluate whether DNA methylation status in microRNA (miRNA) coding regions is associated with childhood obesity. DNA isolated from white blood cells of 24 children (identification sample: 12 obese and 12 non-obese) from the Grupo Navarro de Obesidad Infantil study was hybridized in a 450K methylation microarray. Several CpGs whose DNA methylation levels were statistically different between obese and non-obese children were validated by MassArray® in 95 children (validation sample) from the same study. Microarray analysis identified 16 differentially methylated CpGs between both groups (6 hypermethylated and 10 hypomethylated). DNA methylation levels in miR-1203, miR-412 and miR-216A coding regions significantly correlated with body mass index standard deviation score (BMI-SDS) and explained up to 40% of the variation of BMI-SDS. The network analysis identified 19 well-defined obesity-relevant biological pathways from the KEGG database. MassArray® validation identified three regions located in or near miR-1203, miR-412 and miR-216A coding regions differentially methylated between obese and non-obese children. The current work identified three CpG sites located in coding regions of three miRNAs (miR-1203, miR-412 and miR-216A) that were differentially methylated between obese and non-obese children, suggesting a role of miRNA epigenetic regulation in childhood obesity. © 2016 World Obesity Federation.
Dishion, Thomas J; Mun, Chung Jung; Tein, Jenn-Yun; Kim, Hanjoe; Shaw, Daniel S; Gardner, Frances; Wilson, Melvin N; Peterson, Jenene
2017-04-01
This study examined the validity of micro-social observations and macro ratings of parent-child interaction in early to middle childhood. Seven hundred and thirty-one families representing multiple ethnic groups were recruited and screened as at risk in the context of Women, Infants, and Children (WIC) Nutritional Supplement service settings. Families were randomly assigned to the Family Check-Up (FCU) intervention or the control condition at age 2 and videotaped in structured interactions in the home at ages 2, 3, 4, and 5. Parent-child interaction videotapes were micro-coded using the Relationship Affect Coding System (RACS), which captures the duration of two mutual dyadic states: positive engagement and coercion. Macro ratings of parenting skills were collected after coding the videotapes to assess parent use of positive behavior support and limit-setting skills (or lack thereof). Confirmatory factor analyses revealed that the measurement model of macro ratings of limit setting and positive behavior support was not supported by the data, and these ratings were thus excluded from further analyses. However, there was moderate stability in the families' micro-social dynamics across early childhood, and these dynamics showed significant improvement as a function of random assignment to the FCU. Moreover, parent-child dynamics were predictive of chronic behavior problems as rated by parents in middle childhood, but not emotional problems. We conclude with a discussion of the validity of the RACS and of the methodological advantages of micro-social coding over the statistical limitations of macro-rating observations. Future directions are discussed for observation research in prevention science.
NSRD-10: Leak Path Factor Guidance Using MELCOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
Estimating the source term from a U.S. Department of Energy (DOE) nuclear facility requires that analysts know how to apply the simulation tools used, such as the MELCOR code, particularly for a complicated facility that may include an air ventilation system and other active systems that can influence the environmental pathway of the materials released. DOE has designated MELCOR 1.8.5, an unsupported version, as a DOE ToolBox code in its Central Registry, which includes a leak-path-factor guidance report written in 2004 that did not include experimental validation data. Continuing to use this MELCOR version would require additional verification and validation, which may not be feasible from a project cost standpoint. Instead, a recent MELCOR version should be used. Without developer support and experimental validation, it is difficult to convince regulators that the calculated source term from a DOE facility is accurate and defensible. This research replaces the obsolete version in the 2004 DOE leak-path-factor guidance report by using MELCOR 2.1 (the latest version of MELCOR, with continuing model development and user support) and by including applicable experimental data from the reactor safety arena and from the experimental data used in DOE-HDBK-3010. This research provides best-practice values used in MELCOR 2.1 specifically for leak path determination. With these enhancements, the revised leak-path-guidance report should give confidence to the DOE safety analyst using MELCOR as a source-term determination tool for mitigated accident evaluations.
CTF (Subchannel) Calculations and Validation L3:VVI.H2L.P15.01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
The goal of the Verification and Validation Implementation (VVI) High to Low (Hi2Lo) process is to utilize a validated model in a high-resolution code to generate synthetic data for improvement of the same model in a lower-resolution code. This process is useful in circumstances where experimental data do not exist or are not sufficient in quantity or resolution. Data from the high-fidelity code are treated as calibration data (with appropriate uncertainties and error bounds) which can be used to train parameters that affect solution accuracy in the lower-fidelity code model, thereby reducing uncertainty. This milestone presents a demonstration of the Hi2Lo process derived in the VVI focus area. The majority of the work performed herein describes the steps of the low-fidelity code used in the process, with references to the work detailed in the companion high-fidelity code milestone (Reference 1). The CASL low-fidelity code used to perform this work was Cobra Thermal Fluid (CTF) and the high-fidelity code was STAR-CCM+ (STAR). The master branch version of CTF (pulled May 5, 2017 – Reference 2) was utilized for all CTF analyses performed as part of this milestone. The statistical and VVUQ components of the Hi2Lo framework were performed using Dakota version 6.6 (release date May 15, 2017 – Reference 3). Experimental data from Westinghouse Electric Company (WEC – Reference 4) were used throughout the demonstrated process to compare with the high-fidelity STAR results. A CTF parameter called Beta was chosen as the calibration parameter for this work. By default, Beta is defined as a constant mixing coefficient in CTF and is essentially a tuning parameter for mixing between subchannels. Since CTF does not have turbulence models like STAR, Beta is the parameter that performs the function most similar to the turbulence models in STAR.
The purpose of the work performed in this milestone is to tune Beta to an optimal value that brings the CTF results closer to those measured in the WEC experiments.
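The calibration step described above, tuning a scalar mixing coefficient so that a low-fidelity model best matches high-fidelity data, can be sketched generically. The exponential "model" below is a toy stand-in (not CTF), and the synthetic noisy data merely play the role of the STAR-CCM+ results; a real Hi2Lo workflow would drive this through Dakota:

```python
import numpy as np

# Toy stand-in for the low-fidelity code: a measured spread that decays
# along the channel at a rate set by the mixing coefficient Beta (NOT CTF).
def low_fidelity_model(beta, z):
    return 10.0 * np.exp(-beta * z)

# Synthetic "high-fidelity" data playing the role of the STAR-CCM+ results.
z = np.linspace(0.0, 3.0, 30)
beta_true = 0.8
rng = np.random.default_rng(0)
hi_fi = low_fidelity_model(beta_true, z) + 0.05 * rng.normal(size=z.size)

# Brute-force 1-D calibration: choose the Beta that minimizes the sum of
# squared residuals against the high-fidelity data.
candidates = np.linspace(0.1, 2.0, 400)
sse = [float(np.sum((low_fidelity_model(b, z) - hi_fi) ** 2)) for b in candidates]
beta_cal = float(candidates[int(np.argmin(sse))])
```

Because the synthetic noise is small relative to the signal, the recovered coefficient lands close to the value used to generate the data.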
Laminar Heating Validation of the OVERFLOW Code
NASA Technical Reports Server (NTRS)
Lillard, Randolph P.; Dries, Kevin M.
2005-01-01
OVERFLOW, a structured finite difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities over several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. Configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases shows the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were performed to give recommendations for the correct number of off-body points to be applied to generic problems and for wall-spacing values to capture heat transfer and skin friction. Numerical results show good comparison to the test data for all the configurations.
15 CFR Appendix B to Part 30 - AES Filing Codes
Code of Federal Regulations, 2014 CFR
2014-01-01
... charity FS—Foreign Military Sales ZD—North American Free Trade Agreements (NAFTA) duty deferral shipments...—Validated End User Authorization C58CCD—Consumer Communication Devices C59STA—Strategic Trade Authorization Department of Energy/National Nuclear Security Administration (DOE/NNSA) Codes E01—DOE/NNSA Nuclear...
(000) A proposal concerning the valid publication of suprageneric “autonyms”
USDA-ARS?s Scientific Manuscript database
The International Code of Nomenclature for algae, fungi and plants is revised every six years to incorporate decisions of the Nomenclature Section of successive International Botanical Congresses (IBC) on proposals to amend the Code. The proposal in this paper will be considered at the IBC in Shenzh...
Improving Shipbuilding Productivity Through Use of Standards
1978-06-01
shipbuilding industry. In addition to the more familiar standards (e.g. ASME Boiler and Pressure Vessel Code, IEEE-45, etc.) this will include an...will simply reference valid standards as appropriate (e.g. ASME Boiler and Pressure Vessel Code), and will hopefully work hand in hand with the
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report focuses on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.
The Challenges of Qualitatively Coding Ancient Texts
ERIC Educational Resources Information Center
Slingerland, Edward; Chudek, Maciej
2012-01-01
We respond to several important and valid concerns about our study ("The Prevalence of Folk Dualism in Early China," "Cognitive Science" 35: 997-1007) by Klein and Klein, defending our interpretation of our data. We also argue that, despite the undeniable challenges involved in qualitatively coding texts from ancient cultures,…
Convergent Validity of O*NET Holland Code Classifications
ERIC Educational Resources Information Center
Eggerth, Donald E.; Bowles, Shannon M.; Tunick, Roy H.; Andrew, Michael E.
2005-01-01
The interpretive ease and intuitive appeal of the Holland RIASEC typology have made it nearly ubiquitous in vocational guidance settings. Its incorporation into the Occupational Information Network (O*NET) has moved it another step closer to reification. This research investigated the rates of agreement between Holland code classifications from…
Tang, Wan; Chen, Min; Ni, Jin; Yang, Ximin
2011-01-01
The traditional Radio Frequency Identification (RFID) system, in which the information maintained in tags is passive and static, has no intelligent decision-making ability to suit application and environment dynamics. The Second-Generation RFID (2G-RFID) system, referred as 2G-RFID-sys, is an evolution of the traditional RFID system to ensure better quality of service in future networks. Due to the openness of the active mobile codes in the 2G-RFID system, the realization of conveying intelligence brings a critical issue: how can we make sure the backend system will interpret and execute mobile codes in the right way without misuse so as to avoid malicious attacks? To address this issue, this paper expands the concept of Role-Based Access Control (RBAC) by introducing context-aware computing, and then designs a secure middleware for backend systems, named Two-Level Security Enhancement Mechanism or 2L-SEM, in order to ensure the usability and validity of the mobile code through contextual authentication and role analysis. According to the given contextual restrictions, 2L-SEM can filtrate the illegal and invalid mobile codes contained in tags. Finally, a reference architecture and its typical application are given to illustrate the implementation of 2L-SEM in a 2G-RFID system, along with the simulation results to evaluate how the proposed mechanism can guarantee secure execution of mobile codes for the system. PMID:22163983
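The two-level admission idea described above, a role-based check followed by a contextual check before a mobile code is executed, can be sketched as follows. The roles, operations, and policy table are hypothetical illustrations, not taken from the 2L-SEM specification:

```python
# Hypothetical sketch of the two-level idea: a mobile code carried in a tag is
# executed only if (1) its issuer role is authorized for the requested
# operation and (2) the live context satisfies the code's restrictions.

ROLE_PERMISSIONS = {                     # level 1: role-based policy (assumed)
    "supplier": {"read_location"},
    "auditor": {"read_location", "read_history"},
}

def context_ok(restrictions, context):
    """Level 2: every contextual restriction must match the live context."""
    return all(context.get(key) == value for key, value in restrictions.items())

def admit_mobile_code(code):
    role_ok = code["operation"] in ROLE_PERMISSIONS.get(code["role"], set())
    return role_ok and context_ok(code["restrictions"], code["context"])

good = {"role": "auditor", "operation": "read_history",
        "restrictions": {"zone": "warehouse"}, "context": {"zone": "warehouse"}}
bad = {"role": "supplier", "operation": "read_history",
       "restrictions": {}, "context": {}}
```

Codes failing either check are filtered out before execution, which is the filtering role 2L-SEM plays in the backend system.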
Current and anticipated uses of thermal-hydraulic codes in Germany
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Sommer, F.; Depisch, F.
1997-07-01
In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.
2013-01-01
The FronTier-MHD code has been validated through comparison with experiments on liquid metal jets, and has been used for simulations of liquid mercury targets for the proposed muon collider. Validation studies of the FronTier-MHD code have been performed using experimental and theoretical studies of liquid mercury jets in magnetic fields.
Three-Dimensional Numerical Analyses of Earth Penetration Dynamics
1979-01-31
Lagrangian formulation based on the HEMP method and has been adapted and validated for treatment of normal-incidence (axisymmetric) impact and...code, is a detailed analysis of the structural response of the EPW. This analysis is generated using a nonlinear dynamic, elastic-plastic finite element...based on the HEMP scheme. Thus, the code has the same material modeling capabilities and abilities to track large-scale motion found in the WAVE-L code
Multi-zonal Navier-Stokes code with the LU-SGS scheme
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Yoon, S.
1993-01-01
The LU-SGS (lower upper symmetric Gauss Seidel) algorithm has been implemented into the Compressible Navier-Stokes, Finite Volume (CNSFV) code and validated with a multizonal Navier-Stokes simulation of a transonic turbulent flow around an Onera M6 transport wing. The convergence rate and robustness of the code have been improved and the computational cost has been reduced by at least a factor of 2 over the diagonal Beam-Warming scheme.
Validation of hydrogen gas stratification and mixing models
Wu, Hsingtzu; Zhao, Haihua
2015-05-26
Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed: one for a single buoyant jet in an open space, and another for a large sealed enclosure with both a jet source and a vent near the floor. Both have been validated by comparisons with experimental data, and excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results for the average helium concentration in an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. Computing time for each BMIX++ model on a typical desktop computer is less than 5 min.
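The entrainment coefficient reported above is the key closure in classical one-dimensional integral jet models of the kind scaling-based codes build on. The sketch below integrates a top-hat pure momentum jet (buoyancy neglected) to estimate how much ambient gas is entrained over a given rise height; it is an illustration of the approach under those simplifying assumptions, not the BMIX++ formulation, and the leak parameters are hypothetical:

```python
import math

ALPHA = 0.09  # entrainment coefficient reported above for Froude number 99

def jet_dilution(q0, m0, height, steps=1000):
    """Integrate the top-hat pure-jet equations dQ/dz = 2*alpha*sqrt(pi*M),
    dM/dz = 0 (momentum conserved, buoyancy neglected) and return the
    volumetric dilution Q(height)/Q(0)."""
    q, dz = q0, height / steps
    for _ in range(steps):
        q += 2.0 * ALPHA * math.sqrt(math.pi * m0) * dz
    return q / q0

# Hypothetical leak: volume flux 1.2e-4 m^3/s through a 1 cm diameter orifice.
q0 = 1.2e-4                        # initial volume flux, m^3/s
u0 = q0 / (math.pi * 0.005 ** 2)   # top-hat exit velocity, m/s
m0 = q0 * u0                       # specific momentum flux, m^4/s^2
dilution = jet_dilution(q0, m0, height=1.0)
```

For a pure jet the momentum flux is constant, so the volume flux grows linearly with height and the hydrogen is diluted by roughly a factor of a few tens over the first metre in this example.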
A neuronal model of predictive coding accounting for the mismatch negativity.
Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas
2012-03-14
The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of the MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction error units. A spike-timing-dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of recent past inputs to anticipate future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are derived, and a new magnetoencephalography experiment in healthy human subjects is presented that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
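The core mechanism, predicting each tone from learned transition statistics and emitting a prediction error for what actually arrives, can be sketched in a few lines. This toy count-based model is far simpler than the paper's spiking network and is offered only to illustrate why a rare deviant evokes a large error:

```python
import math

ALPHABET = ("A", "B")

class TransitionPredictor:
    """Toy predictive-coding element: learn first-order transition counts
    (with a uniform prior) and emit surprise for each incoming tone."""
    def __init__(self):
        self.counts = {s: {t: 1 for t in ALPHABET} for s in ALPHABET}

    def step(self, prev, cur):
        row = self.counts[prev]
        p = row[cur] / sum(row.values())   # predicted probability of cur
        row[cur] += 1                      # update after predicting
        return -math.log(p)                # surprise, a crude MMN-like proxy

model = TransitionPredictor()
tones = ["A"] * 50 + ["B"]                 # long standard run, then a deviant
errors = [model.step(a, b) for a, b in zip(tones, tones[1:])]
# The rare deviant at the end evokes the largest prediction error.
```

As the standard repeats, its predicted probability rises and the error shrinks; the deviant, having accumulated almost no evidence, is maximally surprising.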
Center for Multiscale Plasma Dynamics: Report on Activities (UCLA/MIT), 2009-2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Troy Alan
2014-10-03
The final “phaseout” year of the CMPD ended July 2010; a no-cost extension was requested until May 2011 in order to enable the MIT subcontract funds to be fully utilized. Research progress over this time included verification and validation activities for the BOUT and BOUT++ codes, studies of spontaneous reconnection in the VTF facility at MIT, and studies of the interaction between Alfvén waves and drift waves in LAPD. The CMPD also hosted the 6th plasma physics winter school in 2010 (jointly with the Center for Magnetic Self-Organization, an NSF frontier center; significant funding came from NSF for this most recent iteration of the Winter School).
NASA Astrophysics Data System (ADS)
Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent
2014-10-01
Onera, the French aerospace lab, develops and models active imaging systems to understand the relevant physical phenomena affecting these systems' performance. As a consequence, efforts have been made on the propagation of a pulse through the atmosphere and on target geometries and surface properties. These imaging systems must operate at night and in all ambient illumination and weather conditions in order to perform strategic surveillance for various worldwide operations. We have implemented codes for 2D and 3D laser imaging systems. As we aim to image a scene in the presence of rain, snow, fog, or haze, we introduce such light-scattering effects into our numerical models and compare simulated images with measurements provided by commercial laser scanners.
42 CFR 476.84 - Changes as a result of DRG validation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...
42 CFR 476.84 - Changes as a result of DRG validation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...
42 CFR 476.84 - Changes as a result of DRG validation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...
42 CFR 476.84 - Changes as a result of DRG validation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...
Virues-Ortega, Javier; Montaño-Fidalgo, Montserrat; Froján-Parga, María Xesús; Calero-Elvira, Ana
2011-12-01
This study analyzes the interobserver agreement and hypothesis-based known-group validity of the Therapist's Verbal Behavior Category System (SISC-INTER). The SISC-INTER is a behavioral observation protocol comprising a set of verbal categories representing putative behavioral functions of the in-session verbal behavior of a therapist (e.g., discriminative, reinforcing, punishing, and motivational operations). The complete therapeutic process of a clinical case of an individual with marital problems was recorded (10 sessions, 8 hours), and data were arranged in a temporal sequence using 10-min periods. Hypotheses based on the expected performance of the putative behavioral functions portrayed by the SISC-INTER codes across prevalent clinical activities (i.e., assessing, explaining, Socratic method, providing clinical guidance) were tested using autoregressive integrated moving average (ARIMA) models. Known-group validity analyses supported all hypotheses. The SISC-INTER may be a useful tool for describing therapist-client interaction in operant terms. The utility of reliable and valid protocols for the descriptive analysis of clinical practice in terms of verbal behavior is discussed. Copyright © 2011. Published by Elsevier Ltd.