Sample records for v codes

  1. Constacyclic codes over the ring F_q + vF_q + v^2F_q and their applications of constructing new non-binary quantum codes

    NASA Astrophysics Data System (ADS)

    Ma, Fanghui; Gao, Jian; Fu, Fang-Wei

    2018-06-01

    Let R = F_q + vF_q + v^2F_q be a finite non-chain ring, where q is an odd prime power and v^3 = v. In this paper, we propose two methods of constructing quantum codes from (α + βv + γv^2)-constacyclic codes over R. The first one is obtained via the Gray map and the Calderbank-Shor-Steane construction from Euclidean dual-containing (α + βv + γv^2)-constacyclic codes over R. The second one is obtained via the Gray map and the Hermitian construction from Hermitian dual-containing (α + βv + γv^2)-constacyclic codes over R. As an application, some new non-binary quantum codes are obtained.
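
    For context, the two constructions invoked above have the standard textbook forms (not specific to this paper): if C is an [n, k, d]_q linear code containing its Euclidean dual, the CSS construction yields a quantum code with parameters

      C^{\perp} \subseteq C \;\Longrightarrow\; [[\,n,\; 2k - n,\; \geq d\,]]_q,

    and, analogously, an [n, k, d]_{q^2} code containing its Hermitian dual yields an [[n, 2k - n, \geq d]]_q quantum code via the Hermitian construction.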

  2. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  3. ICD Social Codes: An Underutilized Resource for Tracking Social Needs.

    PubMed

    Torres, Jacqueline M; Lawlor, John; Colvin, Jeffrey D; Sills, Marion R; Bettenhausen, Jessica L; Davidson, Amber; Cutler, Gretchen J; Hall, Matt; Gottlieb, Laura M

    2017-09-01

    Social determinants of health (SDH) data collected in health care settings could have important applications for clinical decision-making, population health strategies, and the design of performance-based incentives and penalties. One source for cataloging SDH data is the International Statistical Classification of Diseases and Related Health Problems (ICD). To explore how SDH are captured with ICD Ninth revision SDH V codes in a national inpatient discharge database. Data come from the 2013 Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample, a national stratified sample of discharges from 4363 hospitals from 44 US states. We estimate the rate of ICD-9 SDH V code utilization overall and by patient demographics and payer categories. We additionally estimate the rate of SDH V code utilization for: (a) the 5 most common reasons for hospitalization; and (b) the 5 conditions with the highest rates of SDH V code utilization. Fewer than 2% of overall discharges in the National Inpatient Sample were assigned an SDH V code. There were statistically significant differences in the rate of overall SDH V code utilization by age categories, race/ethnicity, sex, and payer (all P<0.001). Nevertheless, SDH V codes were assigned to <7% of discharges in any demographic or payer subgroup. SDH V code utilization was highest for major diagnostic categories related to mental health and alcohol/substance use-related discharges. SDH V codes are infrequently utilized in inpatient settings for discharges other than those related to mental health and alcohol/substance use. Utilization incentives will likely need to be developed to realize the potential benefits of cataloging SDH information.

  4. Neutronic calculation of fast reactors by the EUCLID/V1 integrated code

    NASA Astrophysics Data System (ADS)

    Koltashev, D. A.; Stakhanova, A. A.

    2017-01-01

    This article considers neutronic calculation of the fast-neutron lead-cooled reactor BREST-OD-300 by the EUCLID/V1 integrated code. The main goal of development and application of integrated codes is nuclear power plant safety justification. EUCLID/V1 is an integrated code designed for coupled neutronic, thermomechanical, and thermohydraulic fast reactor calculations under normal and abnormal operating conditions. The EUCLID/V1 code is being developed at the Nuclear Safety Institute of the Russian Academy of Sciences. The integrated code has a modular structure and consists of three main modules: the thermohydraulic module HYDRA-IBRAE/LM/V1, the thermomechanical module BERKUT, and the neutronic module DN3D. In addition, the integrated code includes databases of fuel, coolant, and structural material properties. The neutronic module DN3D provides full-scale simulation of neutronic processes in fast reactors: heat source distributions, control rod movement, reactivity changes, and other processes can be simulated. The neutron transport equation is solved in the multigroup diffusion approximation. This paper contains some calculations implemented as part of EUCLID/V1 code validation. A transient simulation of the fast-neutron lead-cooled reactor BREST-OD-300 (fuel assembly floating, decompression of a passive feedback system channel) and cross-validation against MCU-FR code results are presented. The calculations demonstrate the application of the EUCLID/V1 code to BREST-OD-300 simulation and safety justification.
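
    For reference, the steady-state multigroup diffusion eigenvalue problem referred to above has the generic textbook form below (standard notation, not taken from the EUCLID/V1 documentation):

      -\nabla \cdot \big( D_g \nabla \phi_g \big) + \Sigma_{r,g}\,\phi_g
        = \sum_{g' \neq g} \Sigma_{s,\,g' \to g}\,\phi_{g'}
        + \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'},
      \qquad g = 1, \dots, G,

    where \phi_g is the group-g scalar flux, D_g the diffusion coefficient, \Sigma_{r,g} the removal cross section, \Sigma_{s,g'\to g} the scattering transfer cross section, \nu\Sigma_{f,g'} the fission production cross section, \chi_g the fission spectrum, and k_{\mathrm{eff}} the effective multiplication factor.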

  5. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2017-12-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system v A^T M_B (s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinal number of the unauthorized set, √(2^{|B|}).
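
    For context on the complexity claim above (a standard Grover-search bound, not specific to this paper): a classical exhaustive search over the 2^{|B|} candidates indexed by the unauthorized set B needs on the order of 2^{|B|} evaluations, whereas Grover's algorithm needs only on the order of the square root of that,

      O\!\left(\sqrt{2^{|B|}}\right) = O\!\left(2^{|B|/2}\right) \text{ oracle queries.}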

  6. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2018-03-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system v A^T M_B (s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinal number of the unauthorized set, √(2^{|B|}).

  7. Initial verification and validation of RAZORBACK - A research reactor transient analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2015-09-01

    This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.

  8. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
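
    To make the definitions-and-uses idea concrete, here is a toy, hypothetical sketch of an initialization-safety check over straight-line assignments; it is not the AutoCert algorithm or its pattern library, just an illustration of treating assignments as definitions and right-hand-side reads as uses:

      import re

      ASSIGN = re.compile(r"^\s*([A-Za-z_]\w*)\s*=\s*(.*)$")

      def check_initialization(lines):
          """Flag reads of variables that have no earlier definition (use-before-def)."""
          defined = set()
          problems = []
          for lineno, line in enumerate(lines, start=1):
              m = ASSIGN.match(line)
              rhs = m.group(2) if m else line
              for var in re.findall(r"[A-Za-z_]\w*", rhs):   # identifiers read here are "uses"
                  if var not in defined:
                      problems.append((lineno, var))
              if m:
                  defined.add(m.group(1))                    # the assignment is a "definition"
          return problems

      print(check_initialization(["x = 1", "y = x + z", "w = y * 2"]))  # -> [(2, 'z')]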

  9. ESTEST: A Framework for the Verification and Validation of Electronic Structure Codes

    NASA Astrophysics Data System (ADS)

    Yuan, Gary; Gygi, Francois

    2011-03-01

    ESTEST is a verification and validation (V&V) framework for electronic structure codes that supports Qbox, Quantum Espresso, ABINIT, the Exciting Code and plans support for many more. We discuss various approaches to the electronic structure V&V problem implemented in ESTEST, that are related to parsing, formats, data management, search, comparison and analyses. Additionally, an early experiment in the distribution of V&V ESTEST servers among the electronic structure community will be presented. Supported by NSF-OCI 0749217 and DOE FC02-06ER25777.

  10. STAR-CCM+ Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    2016-09-30

    The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.

  11. 76 FR 32085 - Medicare Program; Inpatient Psychiatric Facilities Prospective Payment System-Update for Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ... description of comorbidity for chronic renal failure. In addition, we inadvertently omitted from Table 11 the comorbidity code "V4511" for chronic renal failure. These changes are not substantive changes to the... heading "Diagnoses codes," for the renal failure, chronic diagnoses codes, replace code "V451" with...

  12. Accuracy of the new ICD-9-CM code for "drip-and-ship" thrombolytic treatment in patients with ischemic stroke.

    PubMed

    Tonarelli, Silvina B; Tibbs, Michael; Vazquez, Gabriela; Lakshminarayan, Kamakshi; Rodriguez, Gustavo J; Qureshi, Adnan I

    2012-02-01

    A new International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code, V45.88, was approved by the Centers for Medicare and Medicaid Services (CMS) on October 1, 2008. This code identifies patients in whom intravenous (IV) recombinant tissue plasminogen activator (rt-PA) is initiated in one hospital's emergency department, followed by transfer within 24 hours to a comprehensive stroke center, a paradigm commonly referred to as "drip-and-ship." This study assessed the use and accuracy of the new V45.88 code for identifying ischemic stroke patients who meet the criteria for drip-and-ship at 2 advanced certified primary stroke centers. Consecutive patients over a 12-month period were identified by primary ICD-9-CM diagnosis codes related to ischemic stroke. The accuracy of V45.88 code utilization using administrative data provided by Health Information Management Services was assessed through a comparison with data collected in prospective stroke registries maintained at each hospital by a trained abstractor. Out of a total of 428 patients discharged from both hospitals with a diagnosis of ischemic stroke, 37 patients were given ICD-9-CM code V45.88. The internally validated data from the prospective stroke database demonstrated that a total of 40 patients met the criteria for drip-and-ship. A concurrent comparison found that 92% (sensitivity) of the patients treated with drip-and-ship were coded with V45.88. None of the non-drip-and-ship stroke cases received the V45.88 code (100% specificity). The new ICD-9-CM code for drip-and-ship appears to have high specificity and sensitivity, allowing effective data collection by the CMS. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.
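
    As a back-of-the-envelope check of the reported figures, using only the counts quoted above (428 ischemic stroke discharges, 40 true drip-and-ship cases, 37 of them coded V45.88, and no false-positive V45.88 assignments):

      true_cases = 40            # drip-and-ship cases in the prospective registries
      coded_true_cases = 37      # of those, received ICD-9-CM code V45.88
      total_discharges = 428
      false_positives = 0        # no non-drip-and-ship case received V45.88

      sensitivity = coded_true_cases / true_cases
      non_cases = total_discharges - true_cases
      specificity = (non_cases - false_positives) / non_cases
      print(f"sensitivity ~ {sensitivity:.1%}, specificity = {specificity:.1%}")  # ~92.5%, 100.0%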

  13. Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes

    NASA Astrophysics Data System (ADS)

    Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.

    2017-02-01

    The International Reactor Innovative and Secure (IRIS) reactor is a compact power reactor designed with special features, including an Integral Fuel Burnable Absorber (IFBA). The core is heterogeneous both axially and radially. This work provides a full-core burnup analysis for the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality calculations, radial and axial power distributions, and the nuclear peaking factor at different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V code values at different stages of burnup. The two calculation codes show good agreement and correlation. The radial and axial powers for the full core were also compared with published results from the SAS2H/KENO-V code (at the beginning and end of reactor operation). The behavior of both the radial and axial power distributions is quite similar to the other data published using the SAS2H/KENO-V code. The peaking factor values estimated in the present work are close to those calculated with the SAS2H/KENO-V code.
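
    As a point of reference for the comparison above, the nuclear (power) peaking factor is simply the ratio of the maximum local power to the core-average power; a minimal illustration with made-up relative assembly powers, not data from the paper:

      powers = [0.82, 0.95, 1.10, 1.23, 1.05, 0.85]    # hypothetical relative assembly powers
      peaking_factor = max(powers) / (sum(powers) / len(powers))
      print(f"peaking factor = {peaking_factor:.3f}")  # -> 1.230 for this made-up distribution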

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jun Soo; Choi, Yong Joon

    The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.

  15. The power induced effects module: A FORTRAN code which estimates lift increments due to power induced effects for V/STOL flight

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Howard, Kipp E.

    1991-01-01

    A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.

  16. Validity of ICD-9-CM Coding for Identifying Incident Methicillin-Resistant Staphylococcus aureus (MRSA) Infections: Is MRSA Infection Coded as a Chronic Disease?

    PubMed Central

    Schweizer, Marin L.; Eber, Michael R.; Laxminarayan, Ramanan; Furuno, Jon P.; Popovich, Kyle J.; Hota, Bala; Rubin, Michael A.; Perencevich, Eli N.

    2013-01-01

    BACKGROUND AND OBJECTIVE Investigators and medical decision makers frequently rely on administrative databases to assess methicillin-resistant Staphylococcus aureus (MRSA) infection rates and outcomes. The validity of this approach remains unclear. We sought to assess the validity of the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code for infection with drug-resistant microorganisms (V09) for identifying culture-proven MRSA infection. DESIGN Retrospective cohort study. METHODS All adults admitted to 3 geographically distinct hospitals between January 1, 2001, and December 31, 2007, were assessed for presence of incident MRSA infection, defined as an MRSA-positive clinical culture obtained during the index hospitalization, and presence of the V09 ICD-9-CM code. The κ statistic was calculated to measure the agreement between presence of MRSA infection and assignment of the V09 code. Sensitivities, specificities, positive predictive values, and negative predictive values were calculated. RESULTS There were 466,819 patients discharged during the study period. Of the 4,506 discharged patients (1.0%) who had the V09 code assigned, 31% had an incident MRSA infection, 20% had prior history of MRSA colonization or infection but did not have an incident MRSA infection, and 49% had no record of MRSA infection during the index hospitalization or the previous hospitalization. The V09 code identified MRSA infection with a sensitivity of 24% (range, 21%–34%) and positive predictive value of 31% (range, 22%–53%). The agreement between assignment of the V09 code and presence of MRSA infection had a κ coefficient of 0.26 (95% confidence interval, 0.25–0.27). CONCLUSIONS In its current state, the ICD-9-CM code V09 is not an accurate predictor of MRSA infection and should not be used to measure rates of MRSA infection. PMID:21460469
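
    For readers unfamiliar with the metrics quoted above, a minimal sketch of how sensitivity, positive predictive value, and Cohen's kappa fall out of a 2x2 code-versus-culture table; the counts below are rough illustrative numbers back-solved from the abstract's percentages, not the study's actual table:

      def agreement_metrics(tp, fp, fn, tn):
          n = tp + fp + fn + tn
          sensitivity = tp / (tp + fn)            # coded positives among true infections
          ppv = tp / (tp + fp)                    # true infections among coded positives
          p_observed = (tp + tn) / n              # raw agreement
          p_expected = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
          kappa = (p_observed - p_expected) / (1 - p_expected)
          return sensitivity, ppv, kappa

      # ~1,400 coded-and-infected, ~3,100 coded-only, ~4,400 infected-only, rest neither
      print(agreement_metrics(tp=1400, fp=3100, fn=4400, tn=457000))
      # -> roughly (0.24, 0.31, 0.26), matching the reported sensitivity, PPV, and kappa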

  17. The Legal Implication of Cultural Bias in the Intelligence Testing of Disadvantaged School Children

    ERIC Educational Resources Information Center

    Georgetown Law Journal, 1973

    1973-01-01

    The court cases and legal codes cited in this article include: Brown v. Board of Education, 1954; Hobson v. Hansen, 1967; Diana v. State Board of Education (Calif.), 1970; and, California Education Code, 1972. (SF)

  18. Development of a new EMP code at LANL

    NASA Astrophysics Data System (ADS)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes, Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~ 3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three body attachment, two body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair-production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axis of the momentum and configuration spaces is assumed to be parallel and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed as well as the way forward towards an integrated modern EMP code.

  19. RAZORBACK - A Research Reactor Transient Analysis Code Version 1.0 - Volume 3: Verification and Validation Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2017-04-01

    This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.

  20. Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV

    NASA Astrophysics Data System (ADS)

    Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.

    2005-06-01

    The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10^-5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model combined with pre-equilibrium and evaporation models were used for the calculations. The number of defects produced by recoil nuclei in the materials was calculated with the Norgett-Robinson-Torrens model and with an approach combining calculations using the binary collision approximation model and the results of molecular dynamics simulations. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code and the IOTA code.
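
    For orientation, the Norgett-Robinson-Torrens (NRT) model cited above estimates the number of stable displacements produced by a recoil from its damage energy T_dam and the displacement threshold energy E_d; a minimal sketch of the standard formula (the 90 eV threshold in the example is a commonly quoted value for tungsten, not a number from this paper):

      def nrt_displacements(t_dam_ev, e_d_ev):
          """Standard NRT estimate of stable Frenkel pairs produced by one recoil."""
          if t_dam_ev < e_d_ev:
              return 0.0                           # recoil below threshold: no displacement
          if t_dam_ev < 2.0 * e_d_ev / 0.8:
              return 1.0                           # single-displacement regime
          return 0.8 * t_dam_ev / (2.0 * e_d_ev)   # linear cascade regime

      print(nrt_displacements(t_dam_ev=5.0e4, e_d_ev=90.0))  # ~222 defects for a 50 keV damage-energy cascade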

  1. Pediatric complex chronic conditions classification system version 2: updated for ICD-10 and complex medical technology dependence and transplantation

    PubMed Central

    2014-01-01

    Background The pediatric complex chronic conditions (CCC) classification system, developed in 2000, requires revision to accommodate the International Classification of Disease 10th Revision (ICD-10). To update the CCC classification system, we incorporated ICD-9 diagnostic codes that had been either omitted or incorrectly specified in the original system, and then translated between ICD-9 and ICD-10 using General Equivalence Mappings (GEMs). We further reviewed all codes in the ICD-9 and ICD-10 systems to include both diagnostic and procedural codes indicative of technology dependence or organ transplantation. We applied the provisional CCC version 2 (v2) system to death certificate information and 2 databases of health utilization, reviewed the resulting CCC classifications, and corrected any misclassifications. Finally, we evaluated performance of the CCC v2 system by assessing: 1) the stability of the system between ICD-9 and ICD-10 codes using data which included both ICD-9 codes and ICD-10 codes; 2) the year-to-year stability before and after ICD-10 implementation; and 3) the proportions of patients classified as having a CCC in both the v1 and v2 systems. Results The CCC v2 classification system consists of diagnostic and procedural codes that incorporate a new neonatal CCC category as well as domains of complexity arising from technology dependence or organ transplantation. CCC v2 demonstrated close comparability between ICD-9 and ICD-10 and did not detect significant discontinuity in temporal trends of death in the United States. Compared to the original system, CCC v2 resulted in a 1.0% absolute (10% relative) increase in the number of patients identified as having a CCC in national hospitalization dataset, and a 0.4% absolute (24% relative) increase in a national emergency department dataset. Conclusions The updated CCC v2 system is comprehensive and multidimensional, and provides a necessary update to accommodate widespread implementation of ICD-10. PMID:25102958

  2. HIV1 V3 loop hypermutability is enhanced by the guanine usage bias in the part of env gene coding for it.

    PubMed

    Khrustalev, Vladislav Victorovich

    2009-01-01

    Guanine is the most mutable nucleotide in HIV genes because of frequently occurring G to A transitions, which are caused by cytosine deamination in viral DNA minus strands catalyzed by APOBEC enzymes. Distribution of guanine between three codon positions should influence the probability for G to A mutation to be nonsynonymous (to occur in first or second codon position). We discovered that nucleotide sequences of env genes coding for third variable regions (V3 loops) of gp120 from HIV1 and HIV2 have different kinds of guanine usage biases. In the HIV1 reference strain and 100 additionally analyzed HIV1 strains the guanine usage bias in V3 loop coding regions (2G>1G>3G) should lead to elevated nonsynonymous G to A transitions occurrence rates. In the HIV2 reference strain and 100 other HIV2 strains guanine usage bias in V3 loop coding regions (3G>2G>1G) should protect V3 loops from hypermutability. According to the HIV1 and HIV2 V3 alignment, insertion of the sequence enriched with 2G (21 codons in length) occurred during the evolution of HIV1 predecessor, while insertion of the different sequence enriched with 3G (19 codons in length) occurred during the evolution of HIV2 predecessor. The higher is the level of 3G in the V3 coding region, the lower should be the immune escaping mutation occurrence rates. This hypothesis was tested in this study by comparing the guanine usage in V3 loop coding regions from HIV1 fast and slow progressors. All calculations have been performed by our algorithms "VVK In length", "VVK Dinucleotides" and "VVK Consensus" (www.barkovsky.hotmail.ru).
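
    A minimal sketch of the codon-position bookkeeping the abstract relies on: counting guanine in the first, second, and third codon positions of a coding sequence (the paper's 1G, 2G, 3G). The sequence below is a made-up fragment, not an HIV V3 loop region:

      def guanine_by_codon_position(cds):
          """Count G in codon positions 1, 2, 3 of an in-frame coding sequence."""
          cds = cds.upper()
          counts = {1: 0, 2: 0, 3: 0}
          for i in range(0, len(cds) - len(cds) % 3, 3):
              for pos, base in enumerate(cds[i:i + 3], start=1):
                  if base == "G":
                      counts[pos] += 1
          return counts

      print(guanine_by_codon_position("ATGGGCAGTGTAGAAATTAATTGTACAAGACCC"))  # -> {1: 3, 2: 4, 3: 1}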

  3. Long non-coding RNA produced by RNA polymerase V determines boundaries of heterochromatin

    PubMed Central

    Böhmdorfer, Gudrun; Sethuraman, Shriya; Rowley, M Jordan; Krzyszton, Michal; Rothi, M Hafiz; Bouzit, Lilia; Wierzbicki, Andrzej T

    2016-01-01

    RNA-mediated transcriptional gene silencing is a conserved process where small RNAs target transposons and other sequences for repression by establishing chromatin modifications. Central elements of this process are long non-coding RNAs (lncRNA), which in Arabidopsis thaliana are produced by a specialized RNA polymerase known as Pol V. Here we show that non-coding transcription by Pol V is controlled by preexisting chromatin modifications located within the transcribed regions. Most Pol V transcripts are associated with AGO4 but are not sliced by AGO4. Pol V-dependent DNA methylation is established on both strands of DNA and is tightly restricted to Pol V-transcribed regions. This indicates that chromatin modifications are established in close proximity to Pol V. Finally, Pol V transcription is preferentially enriched on the edges of silenced transposable elements, where Pol V transcribes into TEs. We propose that Pol V may play an important role in the determination of heterochromatin boundaries. DOI: http://dx.doi.org/10.7554/eLife.19092.001 PMID:27779094

  4. Organizational Effectiveness Information System (OEIS) User’s Manual

    DTIC Science & Technology

    1986-09-01

    SUBJECT CODES (B-1); C. LISTING OF VALID RESOURCE SYSTEM CODES (C-1). ... the valid codes used in the Implementation and Design System. MACOM: 01 COE, 02 DARCOM, 03 EUSA, 04 FORSCOM, 05 HSC, 06 HQDA, 07 INSCOM, 08 MDW, 09 ...

  5. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
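
    A minimal sketch of the solution-verification step mentioned above, Richardson extrapolation: given a quantity computed on three systematically refined grids with a constant refinement ratio r, estimate the observed order of accuracy and an extrapolated grid-converged value (illustrative numbers, not GBS output):

      import math

      def richardson(f_coarse, f_medium, f_fine, r):
          # observed order of accuracy from the ratio of successive differences
          p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
          # extrapolated (approximately grid-free) value
          f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
          return p, f_exact

      # e.g. a second-order scheme with error ~ C*h^2 on grids h = 0.4, 0.2, 0.1
      print(richardson(1.16, 1.04, 1.01, r=2.0))  # -> approximately (2.0, 1.0)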

  6. The Navy’s Coupled Atmosphere-Ocean-Wave Prediction System

    DTIC Science & Technology

    2011-04-15

    Report documentation page fragment (OCR): report date 15-04-2011; report type: Conference Proceeding; title: The Navy's Coupled Atmosphere-Ocean-Wave Prediction System; release of this paper is approved.

  7. Monte Carlo calculations of initial energies of electrons in water irradiated by photons with energies up to 1GeV.

    PubMed

    Todo, A S; Hiromoto, G; Turner, J E; Hamm, R N; Wright, H A

    1982-12-01

    Previous calculations of the initial energies of electrons produced in water irradiated by photons are extended to 1 GeV by including pair and triplet production. Calculations were performed with the Monte Carlo computer code PHOEL-3, which replaces the earlier code, PHOEL-2. Tables of initial electron energies are presented for single interactions of monoenergetic photons at a number of energies from 10 keV to 1 GeV. These tables can be used to compute kerma in water irradiated by photons with arbitrary energy spectra to 1 GeV. In addition, separate tables of Compton- and pair-electron spectra are given over this energy range. The code PHOEL-3 is available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, Oak Ridge, TN 37830.

  8. NEAMS SOFTWARE V&V PLAN FOR THE MARMOT SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael R Tonks

    2014-03-01

    In order to ensure the accuracy and quality of the microstructure based materials models being developed in conjunction with MARMOT simulations, MARMOT must undergo exhaustive verification and validation. Only after this process can we confidently rely on the MARMOT code to predict the microstructure evolution within the fuel. Therefore, in this report we lay out a V&V plan for the MARMOT code, highlighting where existing data could be used and where new data is required.

  9. RELAP-7 Code Assessment Plan and Requirement Traceability Matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.

    2016-10-01

    RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods and physical models over the last decades. Recently, INL has also been putting effort into establishing the code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements, etc. To this end, we first survey the literature (i.e., international/domestic reports, research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently-developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.

  10. The EUCLID/V1 Integrated Code for Safety Assessment of Liquid Metal Cooled Fast Reactors. Part 1: Basic Models

    NASA Astrophysics Data System (ADS)

    Mosunova, N. A.

    2018-05-01

    The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and pellet dioxide, mixed oxide or nitride uranium-plutonium fuel under normal operation, under anticipated operational occurrences and accident conditions by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. Modeled objects, equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic, neutronics, fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and also the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, using which it is possible to describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of the reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.

  11. Novel variants of the 5S rRNA genes in Eruca sativa.

    PubMed

    Singh, K; Bhatia, S; Lakshmikumaran, M

    1994-02-01

    The 5S ribosomal RNA (rRNA) genes of Eruca sativa were cloned and characterized. They are organized into clusters of tandemly repeated units. Each repeat unit consists of a 119-bp coding region followed by a noncoding spacer region that separates it from the coding region of the next repeat unit. Our study reports novel gene variants of the 5S rRNA genes in plants. Two families of the 5S rDNA, the 0.5-kb size family and the 1-kb size family, coexist in the E. sativa genome. The 0.5-kb size family consists of the 5S rRNA genes (S4) that have coding regions similar to those of other reported plant 5S rDNA sequences, whereas the 1-kb size family consists of the 5S rRNA gene variants (S1) that exist as 1-kb BamHI tandem repeats. S1 is made up of two variant units (V1 and V2) of 5S rDNA where the BamHI site between the two units is mutated. Sequence heterogeneity among S4, V1, and V2 units exists throughout the sequence and is not limited to the noncoding spacer region only. The coding regions of V1 and V2 show approximately 20% dissimilarity to the coding regions of S4 and other reported plant 5S rDNA sequences. Such a large variation in the coding regions of the 5S rDNA units within the same plant species has been observed for the first time. Restriction site variation is observed between the two size classes of 5S rDNA in E. sativa.(ABSTRACT TRUNCATED AT 250 WORDS)

  12. Extension of applicable neutron energy of DARWIN up to 1 GeV.

    PubMed

    Satoh, D; Sato, T; Endo, A; Matsufuji, N; Takada, M

    2007-01-01

    The radiation-dose monitor, DARWIN, needs a set of response functions of the liquid organic scintillator to assess a neutron dose. SCINFUL-QMD is a Monte Carlo based computer code to evaluate the response functions. In order to improve the accuracy of the code, a new light-output function based on experimental data was developed for the production and transport of protons, deuterons, tritons, 3He nuclei and alpha particles, and incorporated into the code. The applicable energy of DARWIN was extended to 1 GeV using the response functions calculated by the modified SCINFUL-QMD code.

  13. Measurement of absolute response functions and detection efficiencies of an NE213 scintillator up to 600 MeV

    NASA Astrophysics Data System (ADS)

    Kajimoto, Tsuyoshi; Shigyo, Nobuhiro; Sanami, Toshiya; Ishibashi, Kenji; Haight, Robert C.; Fotiades, Nikolaos

    2011-02-01

    Absolute neutron response functions and detection efficiencies of an NE213 liquid scintillator that was 12.7 cm in diameter and 12.7 cm in thickness were measured for neutron energies between 15 and 600 MeV at the Weapons Neutron Research facility of the Los Alamos Neutron Science Center. The experiment was performed with continuous-energy neutrons from a spallation neutron source driven by 800-MeV protons. The incident neutron flux was measured using a 238U fission ionization chamber. Measured response functions and detection efficiencies were compared with corresponding calculations using the SCINFUL-QMD code. The calculated and experimental values were in good agreement for data below 70 MeV. However, there were discrepancies in the energy region between 70 and 150 MeV. Thus, the code was partly modified and the revised code provided better agreement with the experimental data.

  14. DebuggingDemo v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaines, Sherry

    Intentionally simple buggy code created for use in a debugging demonstration as part of recruiting tech talks. The code exemplifies a buffer overflow leading to return-address corruption, and also demonstrates an unused return value.

  15. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter Andrew

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  16. HYDRATE v1.5 OPTION OF TOUGH+ v1.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, George

    HYDRATE v1.5 is a numerical code for the simulation of the behavior of hydrate-bearing geologic systems, and represents the third update of the code since its first release [Moridis et al., 2008]. It is an option of TOUGH+ v1.5 [Moridis and Pruess, 2014], a successor to the TOUGH2 [Pruess et al., 1999, 2012] family of codes for multi-component, multiphase fluid and heat flow developed at the Lawrence Berkeley National Laboratory. HYDRATE v1.5 needs the TOUGH+ v1.5 core code in order to compile and execute. It is written in standard FORTRAN 95/2003, and can be run on any computational platform (workstation, PC, Macintosh) for which such compilers are available. By solving the coupled equations of mass and heat balance, the fully operational TOUGH+HYDRATE code can model the non-isothermal gas release, phase behavior and flow of fluids and heat under conditions typical of common natural CH4-hydrate deposits (i.e., in the permafrost and in deep ocean sediments) in complex geological media at any scale (from laboratory to reservoir) at which Darcy's law is valid. TOUGH+HYDRATE v1.5 includes both an equilibrium and a kinetic model of hydrate formation and dissociation. The model accounts for heat and up to four mass components, i.e., water, CH4, hydrate, and water-soluble inhibitors such as salts or alcohols. These are partitioned among four possible phases (gas phase, liquid phase, ice phase and hydrate phase). Hydrate dissociation or formation, phase changes and the corresponding thermal effects are fully described, as are the effects of inhibitors. The model can describe all possible hydrate dissociation mechanisms, i.e., depressurization, thermal stimulation, salting-out effects and inhibitor-induced effects.

  17. TOUGH+ v1.5 Core Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, George J.

    TOUGH+ v1.5 is a numerical code for the simulation of multi-phase, multi-component flow and transport of mass and heat through porous and fractured media, and represents the third update of the code since its first release [Moridis et al., 2008]. TOUGH+ is a successor to the TOUGH2 [Pruess et al., 1991; 2012] family of codes for multi-component, multiphase fluid and heat flow developed at the Lawrence Berkeley National Laboratory. It is written in standard FORTRAN 95/2003, and can be run on any computational platform (workstations, PC, Macintosh). TOUGH+ v1.5 employs dynamic memory allocation, thus minimizing storage requirements. It has a completely modular structure, follows the tenets of Object-Oriented Programming (OOP), and involves the advanced features of FORTRAN 95/2003, i.e., modules, derived data types, the use of pointers, lists and trees, data encapsulation, defined operators and assignments, operator extension and overloading, use of generic procedures, and maximum use of the powerful intrinsic vector and matrix processing operations. TOUGH+ v1.5 is the core code for its family of applications, i.e., the part of the code that is common to all its applications. It provides a description of the underlying physics and thermodynamics of non-isothermal flow, of the mathematical and numerical approaches, as well as a detailed explanation of the general (common to all applications) input requirements, options, capabilities and output specifications. The core code cannot run by itself: it needs to be coupled with the code for the specific TOUGH+ application option that describes a particular type of problem. The additional input requirements specific to a particular TOUGH+ application option and related illustrative examples can be found in the corresponding User's Manual.

  18. A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald D.; Bailey, David H.

    1987-01-01

    A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating point operation rates were measured for each under a variety of system load configurations. These were compared with similar measurements taken on the same system before installation of the pseudo-banking. A useful memory access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V) and either memory-intensive or register-intensive, giving 4 categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier of these 4 categories, a scatter plot of efficiency gain vs Mflops roughly illustrates the improvement in floating point processing speed due to pseudo-banking. On the Cray-2 system tested this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were made for V-register codes, which was to be expected.

  19. Three-dimensional Monte-Carlo simulation of gamma-ray scattering and production in the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, D.J.

    1989-05-15

    Monte Carlo codes have been developed to simulate gamma-ray scattering and production in the atmosphere. The scattering code simulates interactions of low-energy gamma rays (20 to several hundred keV) from an astronomical point source in the atmosphere; a modified code also simulates scattering in a spacecraft. Four incident spectra, typical of gamma-ray bursts, solar flares, and the Crab pulsar, and 511 keV line radiation have been studied. These simulations are consistent with observations of solar flare radiation scattered from the atmosphere. The production code simulates the interactions of cosmic rays which produce high-energy (above 10 MeV) photons and electrons. It has been used to calculate gamma-ray and electron albedo intensities at Palestine, Texas and at the equator; the results agree with observations in most respects. With minor modifications this code can be used to calculate intensities of other high-energy particles. Both codes are fully three-dimensional, incorporating a curved atmosphere; the production code also incorporates the variation with both zenith and azimuth of the incident cosmic-ray intensity due to geomagnetic effects. These effects are clearly reflected in the calculated albedo by intensity contrasts between the horizon and nadir, and between the east and west horizons.

  20. Identifying Homelessness among Veterans Using VA Administrative Data: Opportunities to Expand Detection Criteria.

    PubMed

    Peterson, Rachel; Gundlapalli, Adi V; Metraux, Stephen; Carter, Marjorie E; Palmer, Miland; Redd, Andrew; Samore, Matthew H; Fargo, Jamison D

    2015-01-01

    Researchers at the U.S. Department of Veterans Affairs (VA) have used administrative criteria to identify homelessness among U.S. Veterans. Our objective was to explore the use of these codes in VA health care facilities. We examined VA health records (2002-2012) of Veterans recently separated from the military and identified as homeless using VA conventional identification criteria (ICD-9-CM code V60.0, VA specific codes for homeless services), plus closely allied V60 codes indicating housing instability. Logistic regression analyses examined differences between Veterans who received these codes. Health care services and co-morbidities were analyzed in the 90 days post-identification of homelessness. VA conventional criteria identified 21,021 homeless Veterans from Operations Enduring Freedom, Iraqi Freedom, and New Dawn (rate 2.5%). Adding allied V60 codes increased that to 31,260 (rate 3.3%). While certain demographic differences were noted, Veterans identified as homeless using conventional or allied codes were similar with regards to utilization of homeless, mental health, and substance abuse services, as well as co-morbidities. Differences were noted in the pattern of usage of homelessness-related diagnostic codes in VA facilities nation-wide. Creating an official VA case definition for homelessness, which would include additional ICD-9-CM and other administrative codes for VA homeless services, would likely allow improved identification of homeless and at-risk Veterans. This also presents an opportunity for encouraging uniformity in applying these codes in VA facilities nationwide as well as in other large health care organizations.

  1. Identifying Homelessness among Veterans Using VA Administrative Data: Opportunities to Expand Detection Criteria

    PubMed Central

    Peterson, Rachel; Gundlapalli, Adi V.; Metraux, Stephen; Carter, Marjorie E.; Palmer, Miland; Redd, Andrew; Samore, Matthew H.; Fargo, Jamison D.

    2015-01-01

    Researchers at the U.S. Department of Veterans Affairs (VA) have used administrative criteria to identify homelessness among U.S. Veterans. Our objective was to explore the use of these codes in VA health care facilities. We examined VA health records (2002-2012) of Veterans recently separated from the military and identified as homeless using VA conventional identification criteria (ICD-9-CM code V60.0, VA specific codes for homeless services), plus closely allied V60 codes indicating housing instability. Logistic regression analyses examined differences between Veterans who received these codes. Health care services and co-morbidities were analyzed in the 90 days post-identification of homelessness. VA conventional criteria identified 21,021 homeless Veterans from Operations Enduring Freedom, Iraqi Freedom, and New Dawn (rate 2.5%). Adding allied V60 codes increased that to 31,260 (rate 3.3%). While certain demographic differences were noted, Veterans identified as homeless using conventional or allied codes were similar with regards to utilization of homeless, mental health, and substance abuse services, as well as co-morbidities. Differences were noted in the pattern of usage of homelessness-related diagnostic codes in VA facilities nation-wide. Creating an official VA case definition for homelessness, which would include additional ICD-9-CM and other administrative codes for VA homeless services, would likely allow improved identification of homeless and at-risk Veterans. This also presents an opportunity for encouraging uniformity in applying these codes in VA facilities nationwide as well as in other large health care organizations. PMID:26172386

  2. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
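
    A minimal sketch of the Grid Convergence Index mentioned above, in its common Roache form GCI = Fs * |e21| / (r^p - 1), with safety factor Fs (typically 1.25 for a three-grid study), relative difference e21 between the fine- and medium-grid solutions, refinement ratio r, and observed order p; the numbers below are illustrative, not VAVUQ output:

      def gci_fine(f_fine, f_medium, r, p, fs=1.25):
          e21 = abs((f_medium - f_fine) / f_fine)   # relative fine/medium-grid difference
          return fs * e21 / (r**p - 1.0)            # fractional numerical uncertainty estimate

      # fine/medium solutions 1.01 and 1.04 from a second-order scheme, refinement ratio 2
      print(f"GCI = {gci_fine(1.01, 1.04, r=2.0, p=2.0):.2%}")  # -> GCI = 1.24%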

  3. Numerical Simulation of MIG for 42 GHz, 200 kW Gyrotron

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Bera, Anirban; Kumar, Narendra; Purohit, L. P.; Sinha, Ashok K.

    2010-06-01

    A triode type magnetron injection gun (MIG) of a 42 GHz, 200 kW gyrotron for an Indian TOKAMAK system is designed by using the commercially available code EGUN. The operating voltages of the modulating anode and the accelerating anode are 29 kV and 65 kV respectively. The operating mode of the gyrotron is TE03 and it is operated in fundamental harmonic. The simulated results of MIG obtained with the EGUN code are validated with another trajectory code TRAK.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downar, Thomas

    This report summarizes the current status of VERA-CS Verification and Validation for PWR Core Follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single-physics codes before focusing on the V&V of the coupled physics solution. The report summarizes the V&V of each of the single-physics code systems currently used for core follow analysis (i.e., MPACT, CTF, Multigroup Cross Section Generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR Core Follow (e.g. TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR Core Follow and the ongoing PWR Core Follow V&V activities for FY17. An automated procedure and output data format is proposed for standardizing the output for core follow calculations and automatically generating tables and figures for the VERA-CS LaTeX file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results that would be used within the script to automatically flag any results which require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the Automation Scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes that the Watts Bar cycle depletion cases should be performed with the new cross section library and be included in the first draft of the new VERA-CS manual for release at the end of PoR15. Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17 AMA Plant Core Follow cases should also be included in the VERA-CS manual at the end of PoR15. After completion of the ongoing development of TIAMAT for fully coupled, full core calculations with VERA-CS / BISON 1.5D, and after the completion of the refactoring of MAMBA3D for CIPS analysis in FY17, selected cases from the VERA-CS validation base should be performed, beginning with the legacy cases of Watts Bar and BEAVRS in PoR16. Finally, as potential Phase III future work, some additional considerations are identified for extending the VERA-CS V&V to other reactor types such as the BWR.

  5. Development of V/STOL methodology based on a higher order panel method

    NASA Technical Reports Server (NTRS)

    Bhateley, I. C.; Howell, G. A.; Mann, H. W.

    1983-01-01

    The development of a computational technique to predict the complex flowfields of V/STOL aircraft was initiated in which a number of modules and a potential flow aerodynamic code were combined in a comprehensive computer program. The modules were developed in a building-block approach to assist the user in preparing the geometric input and to compute parameters needed to simulate certain flow phenomena that cannot be handled directly within a potential flow code. The PAN AIR aerodynamic code, which is a higher-order panel method, forms the nucleus of this program. PAN AIR's extensive support for generalized boundary conditions allows the modules to interact with the aerodynamic code through the input and output files, thereby requiring no changes to the basic code and allowing easy replacement of updated modules.

  6. Analysis of 16S-23S rRNA intergenic spacer regions of Vibrio cholerae and Vibrio mimicus.

    PubMed

    Chun, J; Huq, A; Colwell, R R

    1999-05-01

    Vibrio cholerae identification based on molecular sequence data has been hampered by a lack of sequence variation from the closely related Vibrio mimicus. The two species share many genes coding for proteins, such as ctxAB, and show almost identical 16S DNA coding for rRNA (rDNA) sequences. Primers targeting conserved sequences flanking the 3' end of the 16S and the 5' end of the 23S rDNAs were used to amplify the 16S-23S rRNA intergenic spacer regions of V. cholerae and V. mimicus. Two major (ca. 580 and 500 bp) and one minor (ca. 750 bp) amplicons were consistently generated for both species, and their sequences were determined. The largest fragment contains three tRNA genes (tDNAs) coding for tRNAGlu, tRNALys, and tRNAVal, an arrangement not previously found in the bacteria examined to date. The 580-bp amplicon contained tDNAIle and tDNAAla, whereas the 500-bp fragment had a single tDNA coding for either tRNAGlu or tRNAAla. Little variation, i.e., 0 to 0.4%, was found among V. cholerae O1 classical, O1 El Tor, and O139 epidemic strains. Slightly more variation was found against the non-O1/non-O139 serotypes (ca. 1% difference) and V. mimicus (2 to 3% difference). A pair of oligonucleotide primers was designed, based on the region differentiating all V. cholerae strains from V. mimicus. The PCR system developed was subsequently evaluated by using representatives of V. cholerae from environmental and clinical sources, and of other taxa, including V. mimicus. This study provides the first molecular tool for identifying the species V. cholerae.

  7. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  8. Measurements and parameterization of neutron energy spectra from targets bombarded with 120 GeV protons

    NASA Astrophysics Data System (ADS)

    Kajimoto, T.; Shigyo, N.; Sanami, T.; Iwamoto, Y.; Hagiwara, M.; Lee, H. S.; Soha, A.; Ramberg, E.; Coleman, R.; Jensen, D.; Leveling, A.; Mokhov, N. V.; Boehnlein, D.; Vaziri, K.; Sakamoto, Y.; Ishibashi, K.; Nakashima, H.

    2014-10-01

    The energy spectra of neutrons were measured by a time-of-flight method for 120 GeV protons on thick graphite, aluminum, copper, and tungsten targets with an NE213 scintillator at the Fermilab Test Beam Facility. Neutron energy spectra were obtained between 25 and 3000 MeV at emission angles of 30°, 45°, 120°, and 150°. The spectra were parameterized as neutron emissions from three moving sources and then compared with theoretical spectra calculated by the PHITS and FLUKA codes. The theoretical spectra substantially underestimated the measured yields: the integrated neutron yields from 25 to 3000 MeV calculated with the PHITS code were 16-36% of the experimental yields, and those calculated with the FLUKA code were 26-57% of the experimental yields, for all targets and emission angles.
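
    The time-of-flight technique referred to above rests on relativistic kinematics: the neutron kinetic energy follows from the flight path length and the measured flight time. A minimal sketch of that conversion is given below; the flight path and timing values are illustrative, not those of this experiment.

      import math

      # Relativistic conversion from neutron time-of-flight to kinetic energy.
      # The flight path and flight time below are illustrative values only.

      M_N = 939.565          # neutron rest-mass energy [MeV]
      C = 299.792458e6       # speed of light [m/s]

      def tof_to_energy(flight_path_m, tof_s):
          beta = flight_path_m / (tof_s * C)
          if beta >= 1.0:
              raise ValueError("unphysical: apparent speed exceeds c")
          gamma = 1.0 / math.sqrt(1.0 - beta**2)
          return (gamma - 1.0) * M_N   # kinetic energy [MeV]

      # e.g. a neutron covering a 5 m flight path in 25 ns
      print(f"{tof_to_energy(5.0, 25e-9):.1f} MeV")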

  9. Frequency spectrum might act as communication code between retina and visual cortex I

    PubMed Central

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    AIM To explore changes and the possible communication relationship of local potential signals recorded simultaneously from the retina and visual cortex I (V1). METHODS Fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and a fast Fourier transform was used to analyze the frequency components of those signals. RESULTS The amplitudes of the PERG and PVEP were measured at about 36.7 µV and 112.5 µV, respectively; the dominant frequencies of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or higher harmonic generation. CONCLUSION The results suggest that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1. PMID:26682156
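
    The frequency-domain analysis described above can be sketched generically: take the FFT of a recorded potential and report the dominant (peak) frequency and its amplitude. The synthetic trace below merely stands in for a PERG/PVEP recording; it is not the authors' analysis pipeline.

      import numpy as np

      # Generic dominant-frequency extraction from an evoked-potential trace.
      # The synthetic 4 Hz, ~40 uV signal stands in for a real recording.

      fs = 1000.0                            # sampling rate [Hz]
      t = np.arange(0, 2.0, 1.0 / fs)        # 2 s of data
      signal = 40e-6 * np.sin(2 * np.pi * 4.0 * t) + 5e-6 * np.random.randn(t.size)

      spectrum = np.fft.rfft(signal)
      freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
      amplitude = 2.0 * np.abs(spectrum) / t.size   # single-sided amplitude

      peak = np.argmax(amplitude[1:]) + 1           # skip the DC bin
      print(f"dominant frequency: {freqs[peak]:.2f} Hz, "
            f"amplitude: {amplitude[peak] * 1e6:.1f} uV")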

  10. Frequency spectrum might act as communication code between retina and visual cortex I.

    PubMed

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    To explore changes and the possible communication relationship of local potential signals recorded simultaneously from the retina and visual cortex I (V1), fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and a fast Fourier transform was used to analyze the frequency components of those signals. The amplitudes of the PERG and PVEP were measured at about 36.7 µV and 112.5 µV, respectively; the dominant frequencies of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or higher harmonic generation. The results suggest that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1.

  11. SATCOM antenna siting study on a P-3C using the NEC-BSC V3.1

    NASA Technical Reports Server (NTRS)

    Bensman, D.; Marhefka, R. J.

    1990-01-01

    The location of a UHF SATCOM antenna on a P-3C aircraft is studied using the NEC-Basic Scattering Code V3.1 (NEC-BSC3). The NEC-BSC3 is a computer code based on the uniform theory of diffraction. The code is first validated for this application using scale model measurements. In general, the comparisons are good except in 10 degree regions near the nose and tail of the aircraft. Patterns for various antenna locations are analyzed to achieve a prescribed performance.

  12. Emulation of the Active Immune Response in a Computer Network

    DTIC Science & Technology

    2009-01-15

    The Code Red worm propagated faster than the Melissa virus in 1999 and much faster than Morris' worm in 1988. In the case of the Code Red worm, only... (report to AFRL on contract #30602-01-0509, Binghamton, NY, 2002); Skormin, V. A.; Delgado-Frias, J. G.; McGee, D. L.; Giordano, J. V.; Popyack, L. J.; Tarakanov, A., "BASIS: A Biological Approach to System Information Security."

  13. NEQAIRv14.0 Release Notes: Nonequilibrium and Equilibrium Radiative Transport Spectra Program

    NASA Technical Reports Server (NTRS)

    Brandis, Aaron Michael; Cruden, Brett A.

    2014-01-01

    NEQAIR v14.0 is the first parallelized version of NEQAIR. Starting from the last version of the code that went through the internal software release process at NASA Ames (NEQAIR 2008), there have been significant updates to the physics in the code and the computational efficiency. NEQAIR v14.0 supersedes NEQAIR v13.2, v13.1 and the suite of NEQAIR2009 versions. These updates have predominantly been performed by Brett Cruden and Aaron Brandis from ERC Inc at NASA Ames Research Center in 2013 and 2014. A new naming convention is being adopted with this current release. The current and future versions of the code will be named NEQAIR vY.X. The Y will refer to a major release increment. Minor revisions and update releases will involve incrementing X. This is to keep NEQAIR more in line with common software release practices. NEQAIR v14.0 is a standalone software tool for line-by-line spectral computation of radiative intensities and/or radiative heat flux, with one-dimensional transport of radiation. In order to accomplish this, NEQAIR v14.0, as in previous versions, requires the specification of distances (in cm), temperatures (in K) and number densities (in parts/cc) of constituent species along lines of sight. Therefore, it is assumed that flow quantities have been extracted from flow fields computed using other tools, such as CFD codes like DPLR or LAURA, and that lines of sight have been constructed and written out in the format required by NEQAIR v14.0. There are two principal modes for running NEQAIR v14.0. In the first mode NEQAIR v14.0 is used as a tool for creating synthetic spectra of any desired resolution (including convolution with a specified instrument/slit function). The first mode is typically exercised in simulating/interpreting spectroscopic measurements of different sources (e.g. shock tube data, plasma torches, etc.). In the second mode, NEQAIR v14.0 is used as a radiative heat flux prediction tool for flight projects. Correspondingly, NEQAIR has also been used to simulate the radiance measured on previous flight missions. This report summarizes the database updates, corrections that have been made to the code, changes to input files, parallelization, the current usage recommendations, including test cases, and an indication of the performance enhancements achieved.

  14. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode closely resembled the other three codes and the differences were within 5%. Compared with the measured currents, MCNP5 and MCNPX using ITS-mode agreed very well for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.

  15. A systematic review of validated methods to capture stillbirth and spontaneous abortion using administrative or claims data.

    PubMed

    Likis, Frances E; Sathe, Nila A; Carnahan, Ryan; McPheeters, Melissa L

    2013-12-30

    To identify and assess diagnosis, procedure and pharmacy dispensing codes used to identify stillbirths and spontaneous abortion in administrative and claims databases from the United States or Canada. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to stillbirth or spontaneous abortion. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics and assessed each study's methodological rigor using a pre-defined approach. Ten publications addressing stillbirth and four addressing spontaneous abortion met our inclusion criteria. The International Classification of Diseases, Ninth Revision (ICD-9) codes most commonly used in algorithms for stillbirth were those for intrauterine death (656.4) and stillborn outcomes of delivery (V27.1, V27.3-V27.4, and V27.6-V27.7). Papers identifying spontaneous abortion used codes for missed abortion and spontaneous abortion: 632, 634.x, as well as V27.0-V27.7. Only two studies identifying stillbirth reported validation of algorithms. The overall positive predictive value of the algorithms was high (99%-100%), and one study reported an algorithm with 86% sensitivity. However, the predictive value of individual codes was not assessed and study populations were limited to specific geographic areas. Additional validation studies with a nationally representative sample are needed to confirm the optimal algorithm to identify stillbirths or spontaneous abortion in administrative and claims databases. Copyright © 2013 Elsevier Ltd. All rights reserved.
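
    The validation logic reported in these studies (comparing a code-based algorithm against chart review and computing positive predictive value and sensitivity) can be sketched as follows. The code list mirrors the stillbirth codes cited above, but the records and the resulting statistics are purely illustrative.

      # Sketch: validate a claims-based case-finding algorithm against a
      # chart-review gold standard. Records and results are illustrative only.

      STILLBIRTH_CODES = {"656.4", "V27.1", "V27.3", "V27.4", "V27.6", "V27.7"}

      def algorithm_positive(codes):
          return bool(STILLBIRTH_CODES.intersection(codes))

      # (claims codes on the discharge record, chart-confirmed stillbirth?)
      records = [
          (["V27.1", "650"], True),
          (["656.4"], True),
          (["V27.0"], False),          # single liveborn: algorithm-negative
          (["V27.3"], False),          # algorithm-positive, not chart-confirmed
          (["656.4", "V27.4"], True),
      ]

      tp = sum(1 for c, truth in records if algorithm_positive(c) and truth)
      fp = sum(1 for c, truth in records if algorithm_positive(c) and not truth)
      fn = sum(1 for c, truth in records if not algorithm_positive(c) and truth)

      print(f"PPV = {tp / (tp + fp):.2f}, sensitivity = {tp / (tp + fn):.2f}")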

  16. Opinion survey on proposals for improving code stroke in Murcia Health District V, 2014.

    PubMed

    González-Navarro, M; Martínez-Sánchez, M A; Morales-Camacho, V; Valera-Albert, M; Atienza-Ayala, S V; Limiñana-Alcaraz, G

    2017-05-01

    Stroke is a time-dependent neurological disease. Health District V in the Murcia Health System has certain demographic and geographical characteristics that make it necessary to create specific improvement strategies to ensure proper functioning of code stroke (CS). The study objectives were to assess local professionals' opinions about code stroke activation and procedure, and to share these suggestions with the regional multidisciplinary group for code stroke. This cross-sectional and descriptive study used the Delphi technique to develop a questionnaire for doctors and nurses working at all care levels in Area V. An anonymous electronic survey was sent to 154 professionals. The analysis was performed using the SWOT method (Strengths, Weaknesses, Opportunities, and Threats). Researchers collected 51 questionnaires. The main proposals were providing training, promoting communication with the neurologist, overcoming physical distances, using diagnostic imaging tests, motivating professionals, and raising awareness in the general population. Most of the interventions proposed by the participants have been listed in published literature. These improvement proposals were forwarded to the Regional Code Stroke Improvement Group. Copyright © 2015 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  17. A velocity-dependent anomalous radial transport model for (2-D, 2-V) kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, Kowsik; Krasheninnikov, Sergei; Cohen, Ron; Rognlien, Tom

    2008-11-01

    Plasma turbulence constitutes a significant part of radial plasma transport in magnetically confined plasmas. This turbulent transport is modeled in the form of anomalous convection and diffusion coefficients in fluid transport codes. There is a need to model the same in continuum kinetic edge codes [such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory] with non-Maxwellian distributions. We present an anomalous transport model with velocity-dependent convection and diffusion coefficients leading to a diagonal transport matrix similar to that used in contemporary fluid transport models (e.g., UEDGE). Also presented are results of simulations corresponding to radial transport due to long-wavelength ExB turbulence using a velocity-independent diffusion coefficient. A BGK collision model is used to enable comparison with fluid transport codes.
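
    For orientation, a diagonal convection-diffusion model of this kind can be written with velocity-dependent coefficients; the schematic form below (in cylindrical radial geometry) is a generic illustration, not necessarily the exact expression used in the work above.

      % Schematic velocity-dependent anomalous radial flux for a kinetic
      % distribution f(r, v); D(v) and V(v) are model coefficients.
      \[
        \Gamma_{\mathrm{anom}}(r,v) \;=\; -\,D(v)\,\frac{\partial f(r,v)}{\partial r}
        \;+\; V(v)\, f(r,v),
      \]
      \[
        \left.\frac{\partial f}{\partial t}\right|_{\mathrm{anom}}
        \;=\; -\,\frac{1}{r}\,\frac{\partial}{\partial r}\bigl(r\,\Gamma_{\mathrm{anom}}(r,v)\bigr).
      \]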

  18. Trilateral Bridge Rating Criteria.

    DTIC Science & Technology

    1982-04-01

    ...and vehicle impact factor (i.e., V = V_DES, G = G_DES, D = d_m, I = i_m). We take the mud load as 0.8 times the design value (i.e., M = 0.8 M_DES) and set... Table 4.1, Summary of values used in normal crossing: V_DES, design code vehicle weight, 60 tons; G_DES, design code gap, ... For a caution crossing we first parametrically increase the vehicle weight or gap size, by V = (1 + p_V) V_DES (5.1a) and G = (1 + p_G) G_DES (5.1b), where p_V is the...

  19. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.

  20. Improvements to the nuclear model code GNASH for cross section calculations at higher energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, P.G.; Chadwick, M.B.

    1994-05-01

    The nuclear model code GNASH, which in the past has been used predominantly for incident particle energies below 20 MeV, has been modified extensively for calculations at higher energies. The model extensions and improvements are described in this paper, and their significance is illustrated by comparing calculations with experimental data for incident energies up to 160 MeV.

  1. IUTAM Symposium on Statistical Energy Analysis, 8-11 July 1997, Programme

    DTIC Science & Technology

    1997-01-01

    This was the first international scientific gathering devoted... Keywords: energy flow, continuum dynamics, vibrational energy, statistical energy analysis (SEA). ...determination of the correlation... When harmonic motion and time-average are considered, the following...

  2. Effects of Debris Entrainment and Multi-Phase Flow on Plug Loading in an MX Trench.

    DTIC Science & Technology

    1978-09-15

    The drag force F_D on a debris particle with velocity V_p entrained in a gas stream of density ρ_g and velocity V_g is proportional to ρ_g (V_p − V_g)|V_p − V_g| C_D (equation A.1), where the drag coefficient (C_D) is defined by...

  3. Codes over infinite family of rings : Equivalence and invariant ring

    NASA Astrophysics Data System (ADS)

    Irwansyah, Muchtadi-Alamsyah, Intan; Muchlis, Ahmad; Barra, Aleams; Suprijanto, Djoko

    2016-02-01

    In this paper, we study codes over the ring B_k = 𝔽_{p^r}[v_1, …, v_k]/(v_i² = v_i, for all i = 1, …, k). In particular, we focus on two topics: a characterization of the equivalence condition between two codes over B_k using a Gray map into codes over the finite field 𝔽_{p^r}, and finding generators for the invariant ring of the Hamming weight enumerator of Euclidean self-dual codes over B_k.
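
    For k = 1 the construction can be made concrete. The ring B_1 = 𝔽_{p^r}[v]/(v² = v) splits via the idempotents v and 1 − v, and a Gray map commonly used in this setting (the paper's exact choice may differ) sends each ring element to a pair of field coordinates, so a length-n code over B_1 maps to a length-2n code over 𝔽_{p^r}:

      % Idempotent decomposition and a commonly used Gray map for B_1
      \[
        a + vb \;=\; (a + b)\,v \;+\; a\,(1 - v), \qquad a, b \in \mathbb{F}_{p^r},
      \]
      \[
        \varphi : B_1 \longrightarrow \mathbb{F}_{p^r}^{\,2}, \qquad
        \varphi(a + vb) \;=\; \bigl(a,\; a + b\bigr).
      \]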

  4. Independent Validation and Verification of automated information systems in the Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunteman, W.J.; Caldwell, R.

    1994-07-01

    The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).

  5. Measurement of Thick Target Neutron Yields at 0-Degree Bombarded With 140-MeV, 250-MeV And 350-MeV Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwamoto, Yosuke; /JAERI, Kyoto; Taniguchi, Shingo

    Neutron energy spectra at 0° produced from stopping-length graphite, aluminum, iron and lead targets bombarded with 140, 250 and 350 MeV protons were measured at the neutron TOF course in RCNP of Osaka University. The neutron energy spectra were obtained by using the time-of-flight technique in the energy range from 10 MeV to the incident proton energy. To compare with the experimental results, Monte Carlo calculations with the PHITS and MCNPX codes were performed using the JENDL-HE and the LA150 evaluated nuclear data files, the ISOBAR model implemented in PHITS, and the LAHET code in MCNPX. It was found that these calculated results at 0° generally agreed with the experimental results in the energy range above 20 MeV except for graphite at 250 and 350 MeV.

  6. 48 CFR 52.204-7 - System for Award Management.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Contractor and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...

  7. 48 CFR 52.204-7 - System for Award Management.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Contractor and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...

  8. Role of Amines in Adhesion of Polybutadiene to Glass Substrates. II. Reactions of Amines with Triethylsilanol and/or Fumed Silica.

    DTIC Science & Technology

    1982-10-01


  9. Validation and Intercomparison Studies Within GODAE

    DTIC Science & Technology

    2009-09-01

    During the Global Ocean Data Assimilation Experiment (GODAE), seven international... global-ocean and basin-scale forecasting systems of different countries in routine interaction and continuous operation, (2) to assess the quality and...

  10. Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions

    DTIC Science & Technology

    2012-07-01

    Capability Maturity Model Integration (CMMI®) [Davis 2009]. Team Software Process, TSP, and Capability Maturity Model Integration are service... Acronyms: STP, Software Test Plan; TEP, Test and Evaluation Plan; TSP, Team Software Process; V&V, verification and validation. CMU/SEI-2012-TN-016. Supporting the Use of CERT® Secure Coding Standards in DoD Acquisitions. Tim Morrow (Software Engineering Institute); Robert Seacord (Software...

  11. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, P. T.; Dickson, T. L.; Yin, S.

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  12. Implicit SPH v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro

    2016-11-09

    The ISPH code is developed to solve multi-physics meso-scale flow problems using an implicit SPH method. In particular, the code provides solutions for incompressible, multiphase, and electro-kinetic flows.

  13. Object-oriented code SUR for plasma kinetic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levchenko, V.D.; Sigov, Y.S.

    1995-12-31

    We have developed a self-consistent simulation code based on an object-oriented model of plasma (OOMP) for solving the Vlasov/Poisson (V/P), Vlasov/Maxwell (V/M), Bhatnagar-Gross-Krook (BGK) and Fokker-Planck (FP) kinetic equations. Applying an object-oriented approach (OOA) to the simulation of plasmas and plasma-like media by means of splitting methods makes it possible to describe and solve, in a uniform way, a wide range of plasma kinetics problems, including very complicated ones: multi-dimensional, relativistic, collisional, or with specific boundary conditions. This paper gives a brief description of the capabilities of the SUR code as a concrete realization of OOMP.
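
    As a generic illustration of the splitting approach mentioned above (and not the SUR code itself), the sketch below advances a 1D-1V Vlasov/Poisson system with Strang splitting: a half-step of free streaming in x, a full velocity-space kick from the self-consistent electric field, and another half-step in x. Grid sizes, the initial perturbation, and the normalization are arbitrary choices for the example.

      import numpy as np

      # Minimal 1D-1V Vlasov-Poisson solver using operator (Strang) splitting
      # and semi-Lagrangian shifts; a generic illustration, not the SUR code.

      nx, nv = 64, 64
      L, vmax = 4.0 * np.pi, 6.0
      x = np.linspace(0.0, L, nx, endpoint=False)
      v = np.linspace(-vmax, vmax, nv)
      dx, dv, dt = L / nx, v[1] - v[0], 0.1
      X, V = np.meshgrid(x, v, indexing="ij")

      # Weak Landau-damping initial condition for the electron distribution
      f = (1.0 + 0.01 * np.cos(0.5 * X)) * np.exp(-0.5 * V**2) / np.sqrt(2.0 * np.pi)

      def advect_x(f, dt):
          # Shift each v-slice by v*dt in x (exact for periodic data, FFT phase)
          k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
          phase = np.exp(-1j * np.outer(k, v) * dt)
          return np.real(np.fft.ifft(np.fft.fft(f, axis=0) * phase, axis=0))

      def electric_field(f):
          # Poisson equation dE/dx = 1 - n_e (fixed neutralizing ion background)
          rho = 1.0 - np.sum(f, axis=1) * dv
          k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
          rho_hat = np.fft.fft(rho)
          E_hat = np.zeros_like(rho_hat)
          E_hat[1:] = rho_hat[1:] / (1j * k[1:])
          return np.real(np.fft.ifft(E_hat))

      def advect_v(f, E, dt):
          # Shift each x-slice by +E*dt in v (electron acceleration a = -E)
          out = np.empty_like(f)
          for i in range(nx):
              out[i, :] = np.interp(v + E[i] * dt, v, f[i, :], left=0.0, right=0.0)
          return out

      for _ in range(50):                       # Strang splitting: x, v, x
          f = advect_x(f, 0.5 * dt)
          f = advect_v(f, electric_field(f), dt)
          f = advect_x(f, 0.5 * dt)

      print("total particle number:", np.sum(f) * dx * dv)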

  14. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of the MPACT core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.

  15. Computer Code Aids Design Of Wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1993-01-01

    AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.

  16. CFL3D Version 6.4-General Usage and Aeroelastic Analysis

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Rumsey, Christopher L.; Biedron, Robert T.

    2006-01-01

    This document contains the course notes on the computational fluid dynamics code CFL3D version 6.4. It is intended to provide users, from basic to advanced, the information necessary to successfully use the code for a broad range of cases. Much of the course covers capability that has been a part of previous versions of the code, with material compiled from a CFL3D v5.0 manual and from the CFL3D v6 web site prior to the current release. This part of the material is presented for users of the code who are not familiar with computational fluid dynamics. There is new capability in CFL3D version 6.4 presented here that has not previously been published. There are also outdated features no longer used or recommended in recent releases of the code. The information offered here supersedes earlier manuals and updates outdated usage. Where current usage supersedes older versions, notation of that is made. These course notes also provide hints for usage, code installation and examples not found elsewhere.

  17. Full core analysis of IRIS reactor by using MCNPX.

    PubMed

    Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S

    2016-07-01

    This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor using the MCNPX code. The analysis included criticality calculations, radial and axial power distributions, the nuclear peaking factor and the axial offset percent at the beginning of the fuel cycle. The effective multiplication factor obtained by MCNPX is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal calculations and SAS2H/KENO-V code systems. It is found that the k-eff value obtained by MCNPX is closest to the CORD-2 value. The radial and axial powers are compared with other published results carried out using the SAS2H/KENO-V code. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is calculated at different concentrations of Boron-10 in mg/cm at different stages of burnup of the unit cell. The results of this part are compared with published results performed by the HELIOS code. Copyright © 2016 Elsevier Ltd. All rights reserved.
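
    Code-to-code agreement in the effective multiplication factor is often quoted as a reactivity difference in pcm; the small helper below shows that conversion with hypothetical k-eff values, not the paper's actual results.

      # Reactivity difference (pcm) between two multiplication factors.
      # The k-eff values below are hypothetical placeholders.

      def reactivity_diff_pcm(k_ref, k_calc):
          return (1.0 / k_ref - 1.0 / k_calc) * 1.0e5

      k_cord2, k_mcnpx = 1.00120, 1.00155
      print(f"{reactivity_diff_pcm(k_cord2, k_mcnpx):.1f} pcm")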

  18. W-026, Waste Receiving and Processing Facility data management system validation and verification report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, M.E.

    1997-12-05

    This V&V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS Simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the System Requirements Specification (SRS) to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements to interface with DMS messages and data transfers relating to the BWAS operations.

  19. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation of electron beam radiotherapy requires long computation times. A variance reduction technique (VRT) was implemented in the MC calculation to shorten this time. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying electron range rejection controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷ histories. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation times while preserving accuracy.
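
    Whether a variance-reduction setting actually pays off is commonly judged with the Monte Carlo figure of merit, FOM = 1/(relative error² × CPU time): a VRT run is a net win only if its FOM exceeds that of the analog run. The sketch below uses illustrative numbers, not results from this study.

      # Monte Carlo figure of merit, FOM = 1 / (rel_error^2 * T).
      # The relative errors and CPU times below are illustrative only.

      def figure_of_merit(rel_error, cpu_time_s):
          return 1.0 / (rel_error**2 * cpu_time_s)

      analog = figure_of_merit(rel_error=0.010, cpu_time_s=7200.0)   # non-VRT run
      vrt = figure_of_merit(rel_error=0.011, cpu_time_s=3600.0)      # range rejection
      print(f"FOM analog = {analog:.2f}, FOM with VRT = {vrt:.2f}, "
            f"gain = {vrt / analog:.2f}x")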

  20. Matlab based Toolkits used to Interface with Optical Design Software for NASA's James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Howard, Joseph

    2007-01-01

    The viewgraph presentation provides an introduction to the James Webb Space Telescope (JWST). The first part provides a brief overview of Matlab toolkits including CodeV, OSLO, and Zemax Toolkits. The toolkit overview examines purpose, layout, how Matlab gets data from CodeV, function layout, and using cvHELP. The second part provides examples of use with JWST, including wavefront sensitivities and alignment simulations.

  1. Sandia Engineering Analysis Code Access System v. 2.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjaardema, Gregory D.

    The Sandia Engineering Analysis Code Access System (SEACAS) is a suite of preprocessing, post processing, translation, visualization, and utility applications supporting finite element analysis software using the Exodus database file format.

  2. [The significance of quality of life from a socio-legal perspective].

    PubMed

    Axer, Peter

    2014-01-01

    Only rarely is the term quality of life explicitly mentioned in the Social Security Code (Sozialgesetzbuch, SGB). In the statutory health insurance law (Book V of the Social Security Code, SGB V), the term is explicitly regulated within the context of the entitlement to pharmaceuticals. While there are pharmaceuticals that have the priority to increase the quality of life but are excluded from the provision of healthcare (Section 34 (1) Sentence 7 SGB V), the improvement of the quality of life has to be taken into account for the cost-benefit assessment (Section 35b SGB V) as well as for the early pharmaceutical benefit assessment (Section 35a SGB V) and for the formation of reference price groups (Section 35 SGB V) for and in the case of an entitlement to benefits in the event of illness. Copyright © 2014. Published by Elsevier GmbH.

  3. A hydrodynamic approach to cosmology - Methodology

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1992-01-01

    The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing accuracy for integral energy quantities in the present simulations of 1-3 percent over the whole runs. Six species (H I, H II, He I, He II, He III) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.

  4. Neural coding of image structure and contrast polarity of Cartesian, hyperbolic, and polar gratings in the primary and secondary visual cortex of the tree shrew.

    PubMed

    Poirot, Jordan; De Luna, Paolo; Rainer, Gregor

    2016-04-01

    We comprehensively characterize spiking and visual evoked potential (VEP) activity in tree shrew V1 and V2 using Cartesian, hyperbolic, and polar gratings. Neural selectivity to structure of Cartesian gratings was higher than other grating classes in both visual areas. From V1 to V2, structure selectivity of spiking activity increased, whereas corresponding VEP values tended to decrease, suggesting that single-neuron coding of Cartesian grating attributes improved while the cortical columnar organization of these neurons became less precise from V1 to V2. We observed that neurons in V2 generally exhibited similar selectivity for polar and Cartesian gratings, suggesting that structure of polar-like stimuli might be encoded as early as in V2. This hypothesis is supported by the preference shift from V1 to V2 toward polar gratings of higher spatial frequency, consistent with the notion that V2 neurons encode visual scene borders and contours. Neural sensitivity to modulations of polarity of hyperbolic gratings was highest among all grating classes and closely related to the visual receptive field (RF) organization of ON- and OFF-dominated subregions. We show that spatial RF reconstructions depend strongly on grating class, suggesting that intracortical contributions to RF structure are strongest for Cartesian and polar gratings. Hyperbolic gratings tend to recruit least cortical elaboration such that the RF maps are similar to those generated by sparse noise, which most closely approximate feedforward inputs. Our findings complement previous literature in primates, rodents, and carnivores and highlight novel aspects of shape representation and coding occurring in mammalian early visual cortex. Copyright © 2016 the American Physiological Society.

  5. Experimental check of bremsstrahlung dosimetry predictions for 0.75 MeV electrons

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Halbleib, J. A.; Beezhold, W.

    Bremsstrahlung dose in CaF2 TLDs from the radiation produced by 0.75 MeV electrons incident on Ta/C targets is measured and compared with that calculated via the CYLTRAN Monte Carlo code. The comparison was made to validate the code, which is used to predict and analyze radiation environments of flash X-ray simulators measured by TLDs. Over a wide range of Ta target thicknesses and radiation angles, the code is found to agree with the measurements to within their 5% uncertainty. For Ta thicknesses near those that optimize the radiation output, however, the code overestimates the radiation dose at small angles. The maximum overprediction is about 14 ± 5%. The general agreement, nonetheless, gives confidence in using the code at this energy and in the TLD calibration procedure. For the bulk of the measurements, a standard TLD employing a 2.2 mm thick Al equilibrator was used. In this paper we also show that this thickness can significantly attenuate the free-field dose and introduces significant photon buildup in the equilibrator.

  6. Modification of codes NUALGAM and BREMRAD, Volume 1

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Huang, R.; Firstenberg, H.

    1971-01-01

    The NUGAM2 code predicts forward and backward angular energy differential and integrated distributions for gamma photons and fluorescent radiation emerging from finite laminar transport media. It determines buildup and albedo data for scientific research and engineering purposes; it also predicts the emission characteristics of finite radioisotope sources. The results are shown to be in very good agreement with available published data. The code predicts data for many situations in which no published data is available in the energy range up to 5 MeV. The NUGAM3 code predicts the pulse height response of inorganic (NaI and CsI) scintillation detectors to gamma photons. Because it allows the scintillator to be clad and mounted on a photomultiplier as in the experimental or industrial application, it is a more practical and thus useful code than others previously reported. Results are in excellent agreement with published Monte Carlo and experimental data in the energy range up to 4.5 MeV.

  7. A combined Compton and coded-aperture telescope for medium-energy gamma-ray astrophysics

    NASA Astrophysics Data System (ADS)

    Galloway, Michelle; Zoglauer, Andreas; Boggs, Steven E.; Amman, Mark

    2018-06-01

    A future mission in medium-energy gamma-ray astrophysics would allow for many scientific advancements, such as a possible explanation for the excess positron emission from the Galactic center, a better understanding of nucleosynthesis and explosion mechanisms in Type Ia supernovae, and a look at the physical forces at play in compact objects such as black holes and neutron stars. Additionally, further observation in this energy regime would significantly extend the search parameter space for low-mass dark matter. In order to achieve these objectives, an instrument with good energy resolution, good angular resolution, and high sensitivity is required. In this paper we present the design and simulation of a Compton telescope consisting of cubic-centimeter cadmium zinc telluride detectors as absorbers behind a silicon tracker with the addition of a passive coded mask. The goal of the design was to create a very sensitive instrument that is capable of high angular resolution. The simulated telescope achieved energy resolutions of 1.68% FWHM at 511 keV and 1.11% at 1809 keV, on-axis angular resolutions in Compton mode of 2.63° FWHM at 511 keV and 1.30° FWHM at 1809 keV, and is capable of resolving sources to at least 0.2° at lower energies with the use of the coded mask. An initial assessment of the instrument in Compton-imaging mode yields an effective area of 183 cm2 at 511 keV and an anticipated all-sky sensitivity of 3.6 × 10-6 photons cm-2 s-1 for a broadened 511 keV source over a two-year observation time. Additionally, combining a coded mask with a Compton imager to improve point-source localization for positron detection has been demonstrated.
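
    Event reconstruction in a Compton telescope of this kind relies on the Compton scattering formula: the cone opening angle follows from the energy deposited in the tracker and in the absorber. The sketch below shows that kinematic step with illustrative energy deposits; it is not the instrument's actual reconstruction software.

      import math

      # Compton-cone angle from the energy deposited in the scatterer (e1)
      # and in the absorber (e2). The deposits below are illustrative only.

      M_E_C2 = 511.0  # electron rest-mass energy [keV]

      def compton_angle_deg(e1_kev, e2_kev):
          cos_theta = 1.0 - M_E_C2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
          if not -1.0 <= cos_theta <= 1.0:
              raise ValueError("kinematically inconsistent event")
          return math.degrees(math.acos(cos_theta))

      print(f"{compton_angle_deg(150.0, 361.0):.1f} deg")   # a 511 keV photon event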

  8. Monte Carlo dose calculations in homogeneous media and at interfaces: a comparison between GEPTS, EGSnrc, MCNP, and measurements.

    PubMed

    Chibani, Omar; Li, X Allen

    2002-05-01

    Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. A good agreement is observed between the calorimetric electron dose measurements and predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to be dependent on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maximums in comparison with GEPTS and EGSnrc. EGS4 produces too-penetrating electron-dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons for both elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, quite good agreement between calculations and measurements is observed with regard to the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), a good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied. MCNP results depend significantly on the electron energy-indexing method.

  9. Yunnan-III models for evolutionary population synthesis

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Li, L.; Han, Z.; Zhuang, Y.; Kang, X.

    2013-02-01

    We build the Yunnan-III evolutionary population synthesis (EPS) models by using the mesa stellar evolution code, BaSeL stellar spectra library and the initial mass functions (IMFs) of Kroupa and Salpeter, and present colours and integrated spectral energy distributions (ISEDs) of solar-metallicity stellar populations (SPs) in the range of 1 Myr to 15 Gyr. The main characteristic of the Yunnan-III EPS models is the usage of a set of self-consistent solar-metallicity stellar evolutionary tracks (the masses of stars are from 0.1 to 100 M⊙). This set of tracks is obtained by using the state-of-the-art mesa code. The mesa code can evolve stellar models through the thermally pulsing asymptotic giant branch (TP-AGB) phase for low- and intermediate-mass stars. By comparison, we confirm that the inclusion of TP-AGB stars makes the V - K, V - J and V - R colours of SPs redder and the infrared flux larger at ages log(t/yr) ≳ 7.6 [the differences reach the maximum at log(t/yr) ˜ 8.6, ˜0.5-0.2 mag for colours, approximately two times for K-band flux]. We also find that the colour-evolution trends of the model with TP-AGB at intermediate and large ages are similar to those from the starburst99 code, which employs the Padova-AGB stellar library, BaSeL spectral library and the Kroupa IMF. Finally, we compare the colours with other EPS models that include TP-AGB stars (such as CB07, M05, V10 and POPSTAR), and find that the B - V colours agree with one another, but the V - K colour shows a larger discrepancy among these EPS models [˜1 mag when 8 ≲ log(t/yr) ≲ 9]. The stellar evolutionary tracks, isochrones, colours and ISEDs can be obtained on request from the first author or from our website (http://www1.ynao.ac.cn/~zhangfh/). Using the isochrones, users can build their own EPS models. The format of the stellar evolutionary tracks is the same as that in the starburst99 code, so they can be put into the starburst99 code to obtain SP results. Moreover, the colours involving other passbands or on other systems (e.g. HST F439W - F555W colour on AB system) can also be obtained on request.

  10. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.

  11. The Cadmium Zinc Telluride Imager on AstroSat

    NASA Astrophysics Data System (ADS)

    Bhalerao, V.; Bhattacharya, D.; Vibhute, A.; Pawar, P.; Rao, A. R.; Hingar, M. K.; Khanna, Rakesh; Kutty, A. P. K.; Malkar, J. P.; Patil, M. H.; Arora, Y. K.; Sinha, S.; Priya, P.; Samuel, Essy; Sreekumar, S.; Vinod, P.; Mithun, N. P. S.; Vadawale, S. V.; Vagshette, N.; Navalgund, K. H.; Sarma, K. S.; Pandiyan, R.; Seetha, S.; Subbarao, K.

    2017-06-01

    The Cadmium Zinc Telluride Imager (CZTI) is a high-energy, wide-field imaging instrument on AstroSat. CZTI's namesake Cadmium Zinc Telluride detectors cover an energy range from 20 keV to >200 keV, with 11% energy resolution at 60 keV. The coded aperture mask attains an angular resolution of 17 arcmin over a 4.6° × 4.6° (FWHM) field of view. CZTI functions as an open detector above 100 keV, continuously sensitive to GRBs and other transients in about 30% of the sky. The pixellated detectors are sensitive to polarization above ˜100 keV, with exciting possibilities for polarization studies of transients and bright persistent sources. In this paper, we provide details of the complete CZTI instrument, detectors, coded aperture mask, mechanical and electronic configuration, as well as data and products.
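
    For orientation, the quoted 11% energy resolution at 60 keV translates into an absolute FWHM of about 6.6 keV (our arithmetic, not a figure from the instrument paper):

        \Delta E_{\mathrm{FWHM}} \approx 0.11 \times 60\ \mathrm{keV} \approx 6.6\ \mathrm{keV}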

  12. TACOM LCMC IB and DMSMS Mitigation

    DTIC Science & Technology

    2011-09-26


  13. Spatial Correlations in Natural Scenes Modulate Response Reliability in Mouse Visual Cortex

    PubMed Central

    Rikhye, Rajeev V.

    2015-01-01

    Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). Certain stimuli can suppress this intertrial variability to increase the reliability of neuronal responses. In particular, responses to natural scenes, which have broadband spatiotemporal statistics, are more reliable than responses to stimuli such as gratings. However, very little is known about which stimulus statistics modulate reliable coding and how this occurs at the neural ensemble level. Here, we sought to elucidate the role that spatial correlations in natural scenes play in reliable coding. We developed a novel noise-masking method to systematically alter spatial correlations in natural movies, without altering their edge structure. Using high-speed two-photon calcium imaging in vivo, we found that responses in mouse V1 were much less reliable at both the single neuron and population level when spatial correlations were removed from the image. This change in reliability was due to a reorganization of between-neuron correlations. Strongly correlated neurons formed ensembles that reliably and accurately encoded visual stimuli, whereas reducing spatial correlations reduced the activation of these ensembles, leading to an unreliable code. Together with an ensemble-specific normalization model, these results suggest that the coordinated activation of specific subsets of neurons underlies the reliable coding of natural scenes. SIGNIFICANCE STATEMENT The natural environment is rich with information. To process this information with high fidelity, V1 neurons have to be robust to noise and, consequentially, must generate responses that are reliable from trial to trial. While several studies have hinted that both stimulus attributes and population coding may reduce noise, the details remain unclear. Specifically, what features of natural scenes are important and how do they modulate reliability? This study is the first to investigate the role of spatial correlations, which are a fundamental attribute of natural scenes, in shaping stimulus coding by V1 neurons. Our results provide new insights into how stimulus spatial correlations reorganize the correlated activation of specific ensembles of neurons to ensure accurate information processing in V1. PMID:26511254
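
    As a rough illustration of the trial-to-trial reliability metric discussed above, the sketch below computes reliability as the mean pairwise correlation between single-trial responses; it is a generic implementation on synthetic data, not the authors' analysis pipeline, and the array shapes are assumptions.

        import numpy as np

        def reliability(responses):
            # responses: (n_trials, n_timepoints) array for one neuron
            r = np.corrcoef(responses)          # trial-by-trial correlation matrix
            iu = np.triu_indices_from(r, k=1)   # upper triangle, excluding the diagonal
            return r[iu].mean()                 # mean pairwise trial-to-trial correlation

        # Synthetic example: 10 trials of a 200-sample response
        trials = np.random.default_rng(1).normal(size=(10, 200))
        print(reliability(trials))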

  14. Observer Based Compensators for Nonlinear Systems

    DTIC Science & Technology

    1989-03-31


  15. When Terminal Illness Is Worse Than Death: A Multicenter Study of Health-Care Providers' Resuscitation Desires.

    PubMed

    Chavez, Luis O; Einav, Sharon; Varon, Joseph

    2017-11-01

    To investigate how a terminal illness may affect health-care providers' resuscitation preferences. We conducted a cross-sectional survey in 9 health-care institutions located in 4 geographical regions in North and Central America, investigating attitudes toward end-of-life practices in health-care providers. Statistical analysis included descriptive statistics, the χ2 test for the presence of associations (P < 0.05 considered significant), and Cramér's V for the strength of the association. The main outcome measured the correlation between the respondents' present code status and their preference for cardiopulmonary resuscitation (CPR) in case of terminal illness. A total of 852 surveys were completed. Among the respondents, 21% (n = 180) were physicians, 36.9% (n = 317) were nurses, 10.5% (n = 90) were medical students, and 265 participants were other staff members of the institutions. Most respondents (58.3%; n = 500) desired "definitely full code" (physicians 73.2%; n = 131), only 13.8% of the respondents (physicians 8.33%; n = 15) desired "definitely no code" or "partial support," and 20.9% of the respondents (n = 179; among physicians 18.4%; n = 33) had never considered their code status. There was an association between current code status and resuscitation preference in case of terminal illness (P < .001), but this association was overall quite weak (Cramér's V = 0.180). Subgroup analysis revealed no association between current code status and terminal illness code preference among physicians (P = .290) and nurses (P = .316), whereas other hospital workers were more consistent (P < .01, Cramér's V = .291). Doctors and nurses have different end-of-life preferences than other hospital workers. Their desire to undergo CPR may change when facing a terminal illness.
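
    For readers unfamiliar with the effect-size statistic used here, the sketch below computes a χ2 test and Cramér's V for a hypothetical contingency table; the counts and table layout are made up for illustration and are not the study data.

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical counts: rows = provider groups, columns = code-status categories
        table = np.array([[131, 15, 33],
                          [180, 40, 97]])

        chi2, p, dof, expected = chi2_contingency(table)
        n = table.sum()
        k = min(table.shape)                        # smaller dimension of the table
        cramers_v = np.sqrt(chi2 / (n * (k - 1)))   # Cramér's V effect size
        print(f"chi2={chi2:.2f}, p={p:.3g}, Cramér's V={cramers_v:.3f}")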

  16. Seismic assessment of Technical Area V (TA-V).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medrano, Carlos S.

    The Technical Area V (TA-V) Seismic Assessment Report was commissioned as part of the Sandia National Laboratories (SNL) Self-Assessment Requirement per DOE O 414.1, Quality Assurance, for seismic impact on existing facilities at Technical Area V (TA-V). SNL TA-V facilities are located on an existing Uniform Building Code (UBC) Seismic Zone IIB site within the physical boundary of Kirtland Air Force Base (KAFB). The document delineates a summary of the existing facilities with their safety-significant structures, systems and components; identifies DOE guidance, the conceptual framework, past assessments and the present geological and seismic conditions. Building upon the past information and the evolution of the new seismic design criteria, the document discusses the potential impact of the new standards and provides recommendations based upon the current International Building Code (IBC) per DOE O 420.1B, Facility Safety, and DOE G 420.1-2, Guide for the Mitigation of Natural Phenomena Hazards for DOE Nuclear Facilities and Non-Nuclear Facilities.

  17. Measurement of Compression Factor and Error Sensitivity Factor of Facsimile Coding Techniques Submitted to the CCITT By Great Britain and the Federal Republic of Germany

    DTIC Science & Technology

    1979-10-01


  18. 17 CFR Table V to Subpart E of... - Civil Monetary Penalty Inflation Adjustments

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Inflation Adjustments V Table V to Subpart E of Part 201 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION RULES OF PRACTICE Adjustment of Civil Monetary Penalties Pt. 201, Subpt. E, Table V Table V to Subpart E of Part 201—Civil Monetary Penalty Inflation Adjustments U.S. Code citation Civil...

  19. 17 CFR Table V to Subpart E of... - Civil Monetary Penalty Inflation Adjustments

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Inflation Adjustments V Table V to Subpart E of Part 201 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION RULES OF PRACTICE Adjustment of Civil Monetary Penalties Pt. 201, Subpt. E, Table V Table V to Subpart E of Part 201—Civil Monetary Penalty Inflation Adjustments U.S. Code citation Civil...

  20. Three-dimensional simulation of triode-type MIG for 1 MW, 120 GHz gyrotron for ECRH applications

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Kumar, Nitin; Kumar, Narendra; Kumar, Anil; Sinha, A. K.

    2012-01-01

    In this paper, the three-dimensional simulation of a triode-type magnetron injection gun (MIG) for a 120 GHz, 1 MW gyrotron is presented. The operating voltages of the modulating anode and the accelerating anode are 57 kV and 80 kV, respectively. The high-order TE22,6 mode is selected as the operating mode, and the electron beam is launched at the first radial maximum for fundamental beam-mode operation. The initial design is obtained by using the in-house developed code MIGSYN. The numerical simulation is performed by using the commercially available code CST-Particle Studio (PS). The simulated results of the MIG obtained by using CST-PS are validated against the simulation codes EGUN and TRAK. The design output parameters obtained by using these three codes are found to be in close agreement.

  1. Sensitivity analysis of the Gupta and Park chemical models on the heat flux by DSMC and CFD codes

    NASA Astrophysics Data System (ADS)

    Morsa, Luigi; Festa, Giandomenico; Zuppardi, Gennaro

    2012-11-01

    The present study is the logical continuation of a former paper by the first author in which the influence of the chemical models by Gupta and by Park on the computation of heat flux on the Orion and EXPERT capsules was evaluated. Tests were carried out with the direct simulation Monte Carlo code DS2V and with the computational fluid dynamics (CFD) code H3NS. DS2V implements the Gupta model, while H3NS implements the Park model. In order to compare the effects of the chemical models, the Park model was also implemented in DS2V. The results showed that DS2V and H3NS compute a different composition both in the flow field and on the surface, even when using the same chemical model (Park). Furthermore, DS2V computes, with the two chemical models, different compositions in the flow field but the same composition on the surface, and therefore the same heat flux. In the present study, in order to evaluate the influence of these chemical models also in a CFD code, the Gupta and Park models have been implemented in FLUENT. Tests with DS2V and FLUENT have been carried out for the EXPERT capsule at an altitude of 70 km and a velocity of 5000 m/s. The capsule experiences a hypersonic, continuum, low-density regime. Due to the energy level of the flow, the vibration equation, lacking in the original version of FLUENT, has been implemented. The results of the heat flux computation verify that FLUENT is quite sensitive to the Gupta and Park chemical models: at the stagnation point, the percentage difference between the models is about 13%. By contrast, the DS2V results from the two models are practically equivalent.

  2. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  3. 75 FR 47045 - Self-Regulatory Organizations; Notice of Filing of Proposed Rule Change by NYSE Arca, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-04

    .... This Rule specifically requires the adoption of a code of ethics by an investment advisor to include... requiring supervised persons to report any violations of the code of ethics promptly to the chief compliance... designated in the code of ethics; and (v) provisions requiring the investment advisor to provide each of the...

  4. ADPAC v1.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.

    1999-01-01

    The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes - Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user-defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.
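
    As a sketch of the kind of multistage Runge-Kutta time marching described above (the stage coefficients shown are illustrative, not necessarily ADPAC's actual values), each cell-averaged state U is advanced as

        U^{(0)} = U^{n}, \qquad U^{(k)} = U^{(0)} - \alpha_k\,\frac{\Delta t}{V}\,R\!\left(U^{(k-1)}\right),\quad k = 1,\dots,4, \qquad U^{n+1} = U^{(4)}

    with, for example, α = (1/4, 1/3, 1/2, 1), where R is the discrete residual (flux balance plus added numerical dissipation) and V the cell volume; the multigrid procedure wraps around this update to accelerate convergence to steady state.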

  5. Slow Decomposition of Silicone Rubber.

    DTIC Science & Technology

    1982-09-01


  6. Proceedings of the Spacecraft Charging Technology Conference Held in Monterey, California on 31 October - 3 November 1989. Volume 2

    DTIC Science & Technology

    1989-11-01

    The Spacecraft Charging Technology Conference was held at the Naval Postgraduate School, Monterey, California, from 31 October to 3 November 1989. For the given electron distribution, the spacecraft charges negatively according to dV/dt = 4πa² J_th e^(V/V₀)/C, whose solution is V/V₀ = -ln(1 + t/τ).

  7. SINGER: A Computer Code for General Analysis of Two-Dimensional Reinforced Concrete Structures. Volume 1. Solution Process

    DTIC Science & Technology

    1975-05-01

    AFWL-TR-74-228, Vol. I, Defense Nuclear Agency, May 1975. Distributed by the National Technical Information Service, U.S. Department of Commerce.

  8. Atomistic Simulations of Surface Cross-Slip Nucleation in Face-Centered Cubic Nickel and Copper (Postprint)

    DTIC Science & Technology

    2013-02-15

    Simulations use the molecular dynamics code LAMMPS [9], developed at Sandia National Laboratory by Dr. Steve Plimpton and co-workers. The simulation cell is a rectangular parallelepiped, with the z-axis ... Atoms with assigned energies within LAMMPS of greater than 4.42 eV (Ni) or 3.52 eV (Cu) (the energy of atoms in the stacking fault region) mark the partial dislocations.

  9. Performance analysis of an OAM multiplexing-based MIMO FSO system over atmospheric turbulence using space-time coding with channel estimation.

    PubMed

    Zhang, Yan; Wang, Ping; Guo, Lixin; Wang, Wei; Tian, Hongxin

    2017-08-21

    The average bit error rate (ABER) performance of an orbital angular momentum (OAM) multiplexing-based free-space optical (FSO) system with multiple-input multiple-output (MIMO) architecture has been investigated over atmospheric turbulence considering channel estimation and space-time coding. The impact of different types of space-time coding, modulation orders, turbulence strengths, and receive antenna numbers on the transmission performance of this OAM-FSO system is also taken into account. On the basis of the proposed system model, the analytical expressions of the received signals carried by the k-th OAM mode of the n-th receive antenna are derived for the vertical Bell Labs layered space-time (V-BLAST) and space-time block code (STBC) schemes, respectively. With the help of a channel estimator based on the least-squares (LS) algorithm, the zero-forcing equalizer with ordered successive interference cancellation (ZF-OSIC) of the V-BLAST scheme and the Alamouti decoder of the STBC scheme are adopted to mitigate the performance degradation induced by the atmospheric turbulence. The results show that the ABERs obtained by channel estimation have excellent agreement with those of turbulence phase screen simulations. The ABERs of this OAM multiplexing-based MIMO system deteriorate with increasing turbulence strength. Both V-BLAST and STBC schemes can significantly improve the system performance by mitigating the distortions of atmospheric turbulence as well as additive white Gaussian noise (AWGN). In addition, the ABER performance of both space-time coding schemes can be further enhanced by increasing the number of receive antennas for diversity gain, and STBC outperforms V-BLAST in this system for data recovery. This work is beneficial to OAM FSO system design.
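
    As a minimal sketch of the least-squares channel estimation step mentioned above (generic, with made-up dimensions, pilot symbols and noise level; not the authors' simulation code), the channel matrix of a small MIMO link can be estimated from known pilots as follows.

        import numpy as np

        rng = np.random.default_rng(0)
        nt, nr, n_pilot = 2, 2, 16                      # assumed antenna counts and pilot length

        X = rng.choice([-1.0, 1.0], size=(n_pilot, nt)) + 0j                       # known BPSK pilot block
        H_true = (rng.normal(size=(nt, nr)) + 1j * rng.normal(size=(nt, nr))) / np.sqrt(2)
        noise = 0.05 * (rng.normal(size=(n_pilot, nr)) + 1j * rng.normal(size=(n_pilot, nr)))
        Y = X @ H_true + noise                           # received pilot observations

        # Least-squares estimate: H_ls = argmin ||Y - X H||, i.e. (X^H X)^{-1} X^H Y
        H_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)
        print(np.abs(H_ls - H_true).max())               # worst-case estimation error

    The resulting channel estimate would then feed a ZF-OSIC or Alamouti detection stage of the kind the abstract describes.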

  10. Seismology software: state of the practice

    NASA Astrophysics Data System (ADS)

    Smith, W. Spencer; Zeng, Zheng; Carette, Jacques

    2018-05-01

    We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.

  11. Seismology software: state of the practice

    NASA Astrophysics Data System (ADS)

    Smith, W. Spencer; Zeng, Zheng; Carette, Jacques

    2018-02-01

    We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.

  12. 26 CFR 31.3121(v)(2)-1 - Treatment of amounts deferred under certain nonqualified deferred compensation plans.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... nonqualified deferred compensation plans. 31.3121(v)(2)-1 Section 31.3121(v)(2)-1 Internal Revenue INTERNAL... (Chapter 21, Internal Revenue Code of 1954) General Provisions § 31.3121(v)(2)-1 Treatment of amounts... under a nonqualified deferred compensation plan within the meaning of section 3121(v)(2) and this...

  13. 26 CFR 31.3121(v)(2) -1 - Treatment of amounts deferred under certain nonqualified deferred compensation plans.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... nonqualified deferred compensation plans. 31.3121(v)(2) -1 Section 31.3121(v)(2) -1 Internal Revenue INTERNAL... (Chapter 21, Internal Revenue Code of 1954) General Provisions § 31.3121(v)(2)-1 Treatment of amounts... under a nonqualified deferred compensation plan within the meaning of section 3121(v)(2) and this...

  14. 26 CFR 31.3121(v)(2)-1 - Treatment of amounts deferred under certain nonqualified deferred compensation plans.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... nonqualified deferred compensation plans. 31.3121(v)(2)-1 Section 31.3121(v)(2)-1 Internal Revenue INTERNAL... (Chapter 21, Internal Revenue Code of 1954) General Provisions § 31.3121(v)(2)-1 Treatment of amounts... under a nonqualified deferred compensation plan within the meaning of section 3121(v)(2) and this...

  15. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygonal cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through the source shields and the integral line-beam method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  16. 3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure

    NASA Astrophysics Data System (ADS)

    Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.

    2003-04-01

    Light propagation and scattering in the terrestrial atmosphere is usually studied in the framework of 1D radiative transfer theory [1]. However, in reality particles (e.g., ice crystals, solid and liquid aerosols, cloud droplets) are randomly distributed in 3D space. In particular, their concentrations vary both in the vertical and horizontal directions. Therefore, 3D effects influence modern cloud and aerosol retrieval procedures, which are currently based on 1D radiative transfer theory. It should be pointed out that the standard radiative transfer equation allows these more complex situations to be studied as well [2]. In recent years the parallel version of the 2D and 3D RADUGA code has been developed. This version is successfully used in gamma and neutron transport problems [3]. Applications of this code to radiative transfer problems in the atmosphere are described in [4]. The capabilities of the RADUGA code are presented in [5]. The RADUGA code system is a universal solver of radiative transfer problems for complicated models, including 2D and 3D aerosol and cloud fields with arbitrary scattering anisotropy, light absorption, an inhomogeneous underlying surface and topography. Both delta-type and distributed light sources can be accounted for in the framework of the algorithm developed. The accurate numerical procedure is based on the new discrete-ordinate SWDD scheme [6]. The algorithm is specifically designed for parallel supercomputers. The version RADUGA 5.1(P) can run on MBC1000M [7] (768 processors with 10 Gb of hard disc memory for each processor); the peak performance is 1 Tflops. The corresponding scalar version RADUGA 5.1 runs on a PC. As a first example of application of the algorithm developed, we have studied the shadowing effects of clouds on the neighboring cloudless atmosphere, depending on the cloud optical thickness, surface albedo, and illumination conditions. This is of importance for the development of modern satellite aerosol retrieval algorithms. [1] Sobolev, V. V., 1972: Light Scattering in Planetary Atmospheres, M.: Nauka. [2] Evans, K. F., 1998: The spherical harmonic discrete ordinate method for three-dimensional atmospheric radiative transfer, J. Atmos. Sci., 55, 429-446. [3] L.P. Bass, T.A. Germogenova, V.S. Kuznetsov, O.V. Nikolaeva. RADUGA 5.1 and RADUGA 5.1(P) codes for stationary transport equation solution in 2D and 3D geometries on single- and multiprocessor computers. Report at the seminar “Algorithms and Codes for Neutron-Physical Calculations of Nuclear Reactors” (Neutronica 2001), Obninsk, Russia, 30 October - 2 November 2001. [4] T.A. Germogenova, L.P. Bass, V.S. Kuznetsov, O.V. Nikolaeva. Mathematical modelling on parallel computers of solar and laser radiation transport in a 3D atmosphere. Report at the International Symposium of CIS countries “Atmospheric Radiation”, 18-21 June 2002, St. Petersburg, Russia, p. 15-16. [5] L.P. Bass, T.A. Germogenova, O.V. Nikolaeva, V.S. Kuznetsov. Radiative Transfer Universal 2D-3D Code RADUGA 5.1(P) for Multiprocessor Computers. Abstract; poster report at this meeting. [6] L.P. Bass, O.V. Nikolaeva. Correct calculation of angular flux distribution in strongly heterogeneous media and voids. Proc. of the Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, New York, October 5-9, 1997, p. 995-1004. [7] http://www/jscc.ru

  17. Air Traffic Controller Working Memory: Considerations in Air Traffic Control Tactical Operations

    DTIC Science & Technology

    1993-09-01

    Contents include: Information Processing System; Air Traffic Controller Memory; Memory Codes (Visual, Phonetic, and Semantic Codes). The report aims to raise an awareness of the memory requirements of ATC tactical operations by presenting information on working memory processes that are relevant to ... Working memory permeates every aspect of the controller's ability to process air traffic information and control live traffic.

  18. Measurement of pion induced neutron-production double-differential cross sections on Fe and Pb at 870 MeV and 2.1 GeV

    NASA Astrophysics Data System (ADS)

    Iwamoto, Y.; Shigyo, N.; Satoh, D.; Kunieda, S.; Watanabe, T.; Ishimoto, S.; Tenzou, H.; Maehata, K.; Ishibashi, K.; Nakamoto, T.; Numajiri, M.; Meigo, S.; Takada, H.

    2004-08-01

    Neutron-production double-differential cross sections for 870 MeV π+ and π- and 2.1 GeV π+ mesons incident on iron and lead targets were measured with NE213 liquid scintillators by the time-of-flight technique. NE213 liquid scintillators 12.7 cm in diameter and 12.7 cm thick were placed in the directions of 15, 30, 60, 90, 120, and 150°. The typical flight path length was 1.5 m. Neutron detection efficiencies were evaluated from calculations with the SCINFUL and CECIL codes. The experimental results were compared with the JAERI quantum molecular dynamics code. For the meson-incident reactions, adoption of NN in-medium effects was slightly useful for reproducing 870 MeV π+-incident neutron yields at neutron energies of 10-30 MeV, as was the case for proton-incident reactions. The π--incident reaction generates more neutrons than π+ incidence as the number of nucleons in the targets decreases.
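
    As a small worked example of the time-of-flight technique over the 1.5 m path quoted above (our illustration; the flight time used below is arbitrary), the relativistic neutron kinetic energy follows from the measured flight time as in the sketch below.

        m_n = 939.565            # neutron rest energy [MeV]
        c = 0.299792458          # speed of light [m/ns]

        def kinetic_energy(t_ns, path_m=1.5):
            beta = path_m / (t_ns * c)            # v/c inferred from the flight time
            gamma = 1.0 / (1.0 - beta**2) ** 0.5
            return m_n * (gamma - 1.0)            # kinetic energy in MeV

        print(kinetic_energy(20.0))               # e.g. a 20 ns flight time gives roughly 31 MeV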

  19. DIANA-LncBase v2: indexing microRNA targets on non-coding transcripts

    PubMed Central

    Paraskevopoulou, Maria D.; Vlachos, Ioannis S.; Karagkouni, Dimitra; Georgakilas, Georgios; Kanellos, Ilias; Vergoulis, Thanasis; Zagganas, Konstantinos; Tsanakas, Panayiotis; Floros, Evangelos; Dalamagas, Theodore; Hatzigeorgiou, Artemis G.

    2016-01-01

    microRNAs (miRNAs) are short non-coding RNAs (ncRNAs) that act as post-transcriptional regulators of coding gene expression. Long non-coding RNAs (lncRNAs) have been recently reported to interact with miRNAs. The sponge-like function of lncRNAs introduces an extra layer of complexity in the miRNA interactome. DIANA-LncBase v1 provided a database of experimentally supported and in silico predicted miRNA Recognition Elements (MREs) on lncRNAs. The second version of LncBase (www.microrna.gr/LncBase) presents an extensive collection of miRNA:lncRNA interactions. The significantly enhanced database includes more than 70 000 low- and high-throughput, (in)direct miRNA:lncRNA experimentally supported interactions, derived from manually curated publications and the analysis of 153 AGO CLIP-Seq libraries. The new experimental module presents a 14-fold increase compared to the previous release. LncBase v2 hosts in silico predicted miRNA targets on lncRNAs, identified with the DIANA-microT algorithm. The relevant module provides millions of predicted miRNA binding sites, accompanied by detailed metadata and MRE conservation metrics. LncBase v2 provides information on cell-type-specific miRNA:lncRNA regulation and enables users to easily identify interactions in 66 different cell types, spanning 36 tissues, for human and mouse. Database entries are also supported by accurate lncRNA expression information, derived from the analysis of more than 6 billion RNA-Seq reads. PMID:26612864

  20. 75 FR 54676 - Self-Regulatory Organizations; Notice of Filing of Proposed Rule Change by NYSE Arca, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-08

    ... Advisers Act Rule 204A-1. This Rule specifically requires the adoption of a code of ethics by an investment...) provisions requiring supervised persons to report any violations of the code of ethics promptly to the chief... designated in the code of ethics; and (v) provisions requiring the investment advisor to provide each of the...

  1. STATEQ: a nonlinear least-squares code for obtaining Martin thermodynamic representations of fluids in the gaseous and dense gaseous regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milora, S. L.

    1976-02-01

    The use of the code NLIN (IBM Share Program No. 1428) to obtain empirical thermodynamic pressure-volume-temperature (P-V-T) relationships for substances in the gaseous and dense gaseous states is described. When sufficient experimental data exist, the code STATEQ will provide least-squares estimates for the 21 parameters of the Martin model. Another code, APPROX, is described which also obtains parameter estimates for the model by making use of the approximate generalized behavior of fluids. Use of the codes is illustrated in obtaining thermodynamic representations for isobutane. (auth)
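
    As a minimal sketch of the kind of nonlinear least-squares fitting STATEQ performs (illustrative only: a simple two-parameter P-V-T form and synthetic data are assumed here, not the 21-parameter Martin model or the NLIN code itself):

        import numpy as np
        from scipy.optimize import curve_fit

        R = 8.314  # gas constant [J/(mol K)]

        def p_model(vt, b, c):
            v, t = vt                              # molar volume [m^3/mol], temperature [K]
            return R * t / (v - b) + c / v**2      # illustrative van der Waals-like form

        rng = np.random.default_rng(2)
        v = np.linspace(1.0e-3, 5.0e-3, 40)
        t = np.linspace(300.0, 400.0, 40)
        p_data = p_model((v, t), 1.0e-4, -0.1) * (1 + 0.01 * rng.normal(size=v.size))

        params, cov = curve_fit(p_model, (v, t), p_data, p0=[5.0e-5, 0.0])
        print(params)                              # least-squares estimates of b and c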

  2. Biases in detection of apparent “weekend effect” on outcome with administrative coding data: population based study of stroke

    PubMed Central

    Li, Linxin

    2016-01-01

    Objectives To determine the accuracy of coding of admissions for stroke on weekdays versus weekends and any impact on apparent outcome. Design Prospective population based stroke incidence study and a scoping review of previous studies of weekend effects in stroke. Setting Primary and secondary care of all individuals registered with nine general practices in Oxfordshire, United Kingdom (OXVASC, the Oxford Vascular Study). Participants All patients with clinically confirmed acute stroke in OXVASC identified with multiple overlapping methods of ascertainment in 2002-14 versus all acute stroke admissions identified by hospital diagnostic and mortality coding alone during the same period. Main outcomes measures Accuracy of administrative coding data for all patients with confirmed stroke admitted to hospital in OXVASC. Difference between rates of “false positive” or “false negative” coding for weekday and weekend admissions. Impact of inaccurate coding on apparent case fatality at 30 days in weekday versus weekend admissions. Weekend effects on outcomes in patients with confirmed stroke admitted to hospital in OXVASC and impacts of other potential biases compared with those in the scoping review. Results Among 92 728 study population, 2373 episodes of acute stroke were ascertained in OXVASC, of which 826 (34.8%) mainly minor events were managed without hospital admission, 60 (2.5%) occurred out of the area or abroad, and 195 (8.2%) occurred in hospital during an admission for a different reason. Of 1292 local hospital admissions for acute stroke, 973 (75.3%) were correctly identified by administrative coding. There was no bias in distribution of weekend versus weekday admission of the 319 strokes missed by coding. Of 1693 admissions for stroke identified by coding, 1055 (62.3%) were confirmed to be acute strokes after case adjudication. Among the 638 false positive coded cases, patients were more likely to be admitted on weekdays than at weekends (536 (41.0%) v 102 (26.5%); P<0.001), partly because of weekday elective admissions after previous stroke being miscoded as new stroke episodes (267 (49.8%) v 26 (25.5%); P<0.001). The 30 day case fatality after these elective admissions was lower than after confirmed acute stroke admissions (11 (3.8%) v 233 (22.1%); P<0.001). Consequently, relative 30 day case fatality for weekend versus weekday admissions differed (P<0.001) between correctly coded acute stroke admissions and false positive coding cases. Results were consistent when only the 1327 emergency cases identified by “admission method” from coding were included, with more false positive cases with low case fatality (35 (14.7%)) being included for weekday versus weekend admissions (190 (19.5%) v 48 (13.7%), P<0.02). Among all acute stroke admissions in OXVASC, there was no imbalance in baseline stroke severity for weekends versus weekdays and no difference in case fatality at 30 days (adjusted odds ratio 0.85, 95% confidence interval 0.63 to 1.15; P=0.30) or any adverse “weekend effect” on modified Rankin score at 30 days (0.78, 0.61 to 0.99; P=0.04) or one year (0.76, 0.59 to 0.98; P=0.03) among incident strokes. Conclusion Retrospective studies of UK administrative hospital coding data to determine “weekend effects” on outcome in acute medical conditions, such as stroke, can be undermined by inaccurate coding, which can introduce biases that cannot be reliably dealt with by adjustment for case mix. PMID:27185754

  3. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  4. Signal optimization and analysis using PASSER V-07 : training workshop: code IPR006.

    DOT National Transportation Integrated Search

    2011-01-01

    The objective of this project was to conduct one pilot workshop and five regular workshops to teach the effective use of the enhanced PASSER V-07 arterial signal timing optimization software. PASSER V-07 and materials for conducting a one-day trainin...

  5. Description and Use of the Plume Radiation Code ATLES

    DTIC Science & Technology

    1977-05-13


  6. Source Methodology for Turbofan Noise Prediction (SOURCE3D Technical Documentation)

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This report provides the analytical documentation for the SOURCE3D Rotor Wake/Stator Interaction Code. It derives the equations for the rotor scattering coefficients and stator source vector and scattering coefficients that are needed for use in the TFANS (Theoretical Fan Noise Design/Prediction System). SOURCE3D treats the rotor and stator as isolated source elements. TFANS uses this information, along with scattering coefficients for inlet and exit elements, and provides complete noise solutions for turbofan engines. SOURCE3D is composed of a collection of FORTRAN programs that have been obtained by extending the approach of the earlier V072 Rotor Wake/Stator Interaction Code. Similar to V072, it treats the rotor and stator as a collection of blades and vanes having zero thickness and camber contained in an infinite, hardwall annular duct. SOURCE3D adds important features to the V072 capability: a rotor element, swirl flow and vorticity waves, actuator disks for flow turning, and combined rotor/actuator disk and stator/actuator disk elements. These items allow reflections from the rotor, frequency scattering, and mode trapping, thus providing more complete noise predictions than previously. The code has been thoroughly verified through comparison with D.B. Hanson's CUP2D two-dimensional code using a narrow annulus test case.

  7. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

    2002-09-11

    The calculations presented compare the performance of the three Monte Carlo codes PENELOPE-1999, MCNP-4C and PITS for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior simulation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, which would be outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  9. EPA Office of Water (OW): Nonpoint Source Projects NHDPlus Indexed Dataset

    EPA Pesticide Factsheets

    GRTS locational data for nonpoint source projects. GRTS locations are coded onto NHDPlus v2.1 flowline features to create point and line events or coded onto NHDPlus v2.1 waterbody features to create area events. In addition to NHDPlus reach indexed data there may also be custom events (point, line or area) that are not associated with NHD and are in an EPA standard format that is compatible with EPA's Reach Address Database. Custom events are used to represent GRTS locations that are not represented well in NHDPlus.

  10. On the effect of updated MCNP photon cross section data on the simulated response of the HPA TLD.

    PubMed

    Eakins, Jonathan

    2009-02-01

    The relative response of the new Health Protection Agency thermoluminescence dosimeter (TLD) has been calculated for Narrow Series X-ray distribution and (137)Cs photon sources using the Monte Carlo code MCNP5, and the results compared with those obtained during its design stage using the predecessor code, MCNP4c2. The results agreed at intermediate energies (approximately 0.1 MeV to (137)Cs), but differed at low energies (<0.1 MeV) by up to approximately 10%. This disparity has been ascribed to differences in the default photon interaction data used by the two codes, and derives ultimately from the effect on absorbed dose of the recent updates to the photoelectric cross sections. The sources of these data have been reviewed.

  11. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasso, A.; Ferrari, A.; Ferrari, A.

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  12. Factor Structure and Incremental Validity of the Enhanced Computer- Administered Tests

    DTIC Science & Technology

    1992-07-01

    ... performance in the mechanical maintenance specialties. Subject terms: aptitude tests, ASVAB (Armed Services Vocational Aptitude Battery), CAT. ... a computerized adaptive testing version of the ASVAB (CAT-ASVAB), the psychomotor portion of the General Aptitude Test Battery (GATB), and ...

  13. Initial applications of the non-Maxwellian extension of the full-wave TORIC v.5 code in the mid/high harmonic and minority heating regimes

    NASA Astrophysics Data System (ADS)

    Bertelli, N.; Valeo, E. J.; Phillips, C. K.

    2015-11-01

    A non-Maxwellian extension of the full-wave TORIC v.5 code in the mid/high harmonic and minority heating regimes has been revisited. In both regimes the treatment of non-Maxwellian ions is needed in order to improve the analysis of combined fast wave (FW) and neutral beam injection (NBI) heated discharges in current fusion devices. Additionally, this extension is also needed in time-dependent analyses, where combined heating experiments are generally considered. Initial numerical cases with thermal ions and with non-Maxwellian ions are presented for both regimes. The simulations are then compared with results from the AORSA code, which has already been extended to include non-Maxwellian ions. First attempts to apply this extension in a self-consistent way with the NUBEAM module, which is included in the TRANSP code, are also discussed. Work supported by US DOE Contracts # DE-FC02-01ER54648 and DE-AC02-09CH11466.

  14. Amino-terminal domain of the v-fms oncogene product includes a functional signal peptide that directs synthesis of a transforming glycoprotein in the absence of feline leukemia virus gag sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, E.F.; Roussel, M.F.; Hampe, A.

    1986-08-01

    The nucleotide sequence of a 5' segment of the human genomic c-fms proto-oncogene suggested that recombination between feline leukemia virus and feline c-fms sequences might have occurred in a region encoding the 5' untranslated portion of c-fms mRNA. The polyprotein precursor gP180(gag-fms) encoded by the McDonough strain of feline sarcoma virus was therefore predicted to contain 34 v-fms-coded amino acids derived from sequences of the c-fms gene that are not ordinarily translated from the proto-oncogene mRNA. The gP180(gag-fms) polyprotein was cotranslationally cleaved near the gag-fms junction to remove its gag gene-coded portion. Determination of the amino-terminal sequence of the resulting v-fms-coded glycoprotein, gp120(v-fms), showed that the site of proteolysis corresponded to a predicted signal peptidase cleavage site within the c-fms gene product. Together, these analyses suggested that the linked gag sequences may not be necessary for expression of a biologically active v-fms gene product. The gag-fms sequences of feline sarcoma virus strain McDonough and the v-fms sequences alone were inserted into a murine retroviral vector containing a neomycin resistance gene. The authors conclude that a cryptic hydrophobic signal peptide sequence in v-fms was unmasked by gag deletion, thereby allowing the correct orientation and transport of the v-fms gene product within membranous organelles. It seems likely that the proteolytic cleavage of gP180(gag-fms) is mediated by signal peptidase and that the amino termini of gp140(v-fms) and the c-fms gene product are identical.

  15. Photoionization and High Density Gas

    NASA Technical Reports Server (NTRS)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code, which has been available for public use for some time. However, it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and an improved physical description of ionization and excitation. In particular, it is now applicable to high-density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high-density situations.

  16. Coding of Border Ownership in Monkey Visual Cortex

    PubMed Central

    Zhou, Hong; Friedman, Howard S.; von der Heydt, Rüdiger

    2016-01-01

    Areas V1 and V2 of the visual cortex have traditionally been conceived as stages of local feature representations. We investigated whether neural responses carry information about how local features belong to objects. Single-cell activity was recorded in areas V1, V2, and V4 of awake behaving monkeys. Displays were used in which the same local feature (contrast edge or line) could be presented as part of different figures. For example, the same light–dark edge could be the left side of a dark square or the right side of a light square. Each display was also presented with reversed contrast. We found significant modulation of responses as a function of the side of the figure in >50% of neurons of V2 and V4 and in 18% of neurons of the top layers of V1. Thus, besides the local contrast border information, neurons were found to encode the side to which the border belongs (“border ownership coding”). A majority of these neurons coded border ownership and the local polarity of luminance–chromaticity contrast. The others were insensitive to contrast polarity. Another 20% of the neurons of V2 and V4, and 48% of top layer V1, coded local contrast polarity, but not border ownership. The border ownership-related response differences emerged soon (<25 msec) after the response onset. In V2 and V4, the differences were found to be nearly independent of figure size up to the limit set by the size of our display (21°). Displays that differed only far outside the conventional receptive field could produce markedly different responses. When tested with more complex displays in which figure-ground cues were varied, some neurons produced invariant border ownership signals, others failed to signal border ownership for some of the displays, but neurons that reversed signals were rare. The influence of visual stimulation far from the receptive field center indicates mechanisms of global context integration. The short latencies and incomplete cue invariance suggest that the border-ownership effect is generated within the visual cortex rather than projected down from higher levels. PMID:10964965

  17. A Monte Carlo study on {sup 223}Ra imaging for unsealed radionuclide therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Miwa, Kenta; Sasaki, Masayuki

    Purpose: Radium-223 (²²³Ra), an α-emitting radionuclide, is used in unsealed radionuclide therapy for metastatic bone tumors. The demand for qualitative ²²³Ra imaging is growing to optimize dosimetry. The authors simulated ²²³Ra imaging using an in-house Monte Carlo simulation code and investigated the feasibility and utility of ²²³Ra imaging. Methods: The Monte Carlo code comprises two modules, HEXAGON and NAI. The HEXAGON code simulates the photon and electron interactions in the tissues and collimator, and the NAI code simulates the response of the NaI detector system. A 3D numeric phantom created using computed tomography images of a chest phantom was installed in the HEXAGON code. ²²³Ra accumulated in a part of the spine, and three x-rays and 19 γ rays between 80 and 450 keV were selected as the emitted photons. To evaluate the quality of the ²²³Ra imaging, the authors also simulated technetium-99m (⁹⁹ᵐTc) imaging under the same conditions and compared the results. Results: The sensitivities of the three photopeaks were 147 counts per unit of source activity (cps MBq⁻¹; photopeak: 84 keV, full width of energy window: 20%), 166 cps MBq⁻¹ (154 keV, 15%), and 158 cps MBq⁻¹ (270 keV, 10%) for a low-energy general-purpose (LEGP) collimator, and those for the medium-energy general-purpose (MEGP) collimator were 33, 13, and 8.0 cps MBq⁻¹, respectively. In the case of ⁹⁹ᵐTc, the sensitivity was 55 cps MBq⁻¹ (141 keV, 20%) for LEGP and 52 cps MBq⁻¹ for MEGP. The fractions of unscattered photons of the total photons reflecting the image quality were 0.09 (84 keV), 0.03 (154 keV), and 0.02 (270 keV) for the LEGP collimator and 0.41, 0.25, and 0.50 for the MEGP collimator, respectively. Conversely, this fraction was approximately 0.65 for the simulated ⁹⁹ᵐTc imaging. The sensitivity with the LEGP collimator appeared very high. However, almost all of the counts were because of photons that penetrated or were scattered in the collimator; therefore, the proportions of unscattered photons were small. Conclusions: Their simulation study revealed that the most promising scheme for ²²³Ra imaging is an 84-keV window using an MEGP collimator. The sensitivity of the photopeaks above 100 keV is too low for ²²³Ra imaging. A comparison of the fractions of unscattered photons reveals that the sensitivity and image quality are approximately two-thirds of those for ⁹⁹ᵐTc imaging.

  18. Radioactive ion beams produced by neutron-induced fission at ISOLDE

    NASA Astrophysics Data System (ADS)

    Catherall, R.; Lettry, J.; Gilardoni, S.; Köster, U.; Isolde Collaboration

    2003-05-01

    The production rates of neutron-rich fission products for the next-generation radioactive beam facility EURISOL [EU-RTD Project EURISOL (HPRI-CT-1999-50001)] are mainly limited by the maximum amount of power deposited by protons in the target. An alternative approach is to use neutron beams to induce fission in actinide targets. This has the advantage of reducing: the energy deposited by the proton beam in the target; contamination from neutron-deficient isobars that would be produced by spallation; and mechanical stress on the target. At ISOLDE CERN [E. Kugler, Hyperfine Interact. 129 (2000) 23], tests have been made on standard ISOLDE actinide targets using fast-neutron bunches produced by bombarding thick, high-Z metal converters with 1 and 1.4 GeV proton pulses. This paper reviews the first applications of converters used at ISOLDE. It highlights the different geometries and the techniques used to compare fission yields produced by the proton beam directly on the target with neutron-induced fission. Results from the six targets already tested, namely UC2/graphite and ThO2 targets with tungsten and tantalum converters, are presented. To gain further knowledge for the design of a dedicated target as required by the TARGISOL project [EU-RTD Project TARGISOL (HPRI-CT-2001-50033)], the results are compared to simulations, using the MARS [N.V. Mokhov, S.I. Striganov, A. Van Ginneken, S.G. Mashnik, A.J. Sierk, J. Ranft, MARS code developments, in: 4th Workshop on Simulating Accelerator Radiation Environments, SARE-4, Knoxville, USA, 14-15.9.1998, FERMILAB-PUB-98-379, nucl-th/9812038; N.V. Mokhov, The Mars Code System User's Guide, Fermilab-FN-628, 1995; N.V. Mokhov, MARS Code Developments, Benchmarking and Applications, Fermilab-Conf-00-066, 2000; O.E. Krivosheev, N.V. Mokhov, A New MARS and its Applications, Fermilab-Conf-98/43, 1998] code interfaced with MCNP [J.S. Hendricks, MCNP4C LANL Memo X-5; JSH-2000-3; J.F. Briesmeister (Ed.), MCNP - A General Monte Carlo N-Particle Transport Code, Version 4C, LA-13709-M] libraries, of the neutron flux from the converters interacting with the actinide targets.

  20. 5 CFR 550.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL... subchapter V of chapter 55 of title 5, United States Code. Basic workweek, for full-time employees, means the... Foreign Service primary skill code of 2501; (4) Who is a special agent in the Diplomatic Security Service...

  1. Track structure in radiation biology: theory and applications.

    PubMed

    Nikjoo, H; Uehara, S; Wilson, W E; Hoshi, M; Goodhead, D T

    1998-04-01

    A brief review is presented of the basic concepts in track structure, and the relative merits of various theoretical approaches adopted in Monte-Carlo track-structure codes are examined. In the second part of the paper, a formal cluster analysis is introduced to calculate cluster-distance distributions. Total experimental ionization cross-sections were least-squares fitted and compared with the calculation by various theoretical methods. The Monte-Carlo track-structure code Kurbuc was used to examine and compare the spectrum of the secondary electrons generated by using functions given by Born-Bethe, Jain-Khare, Gryzinski, Kim-Rudd, Mott and Vriens' theories. The cluster analysis in track structure was carried out using the k-means method and Hartigan algorithm. Data are presented on experimental and calculated total ionization cross-sections: inverse mean free path (IMFP) as a function of electron energy used in Monte-Carlo track-structure codes; the spectrum of secondary electrons generated by different functions for 500 eV primary electrons; cluster analysis for 4 MeV and 20 MeV alpha-particles in terms of the frequency of total cluster energy to the root-mean-square (rms) radius of the cluster and differential distance distributions for a pair of clusters; and finally relative frequency distribution for energy deposited in DNA, single-strand break and double-strand breaks for 10 MeV/u protons, alpha-particles and carbon ions. There are a number of Monte-Carlo track-structure codes that have been developed independently, and the benchmarking presented in this paper allows a better choice of the theoretical method adopted in a track-structure code to be made. A systematic benchmarking of cross-sections and spectra of the secondary electrons shows differences between the codes at atomic level, but such differences are not significant in biophysical modelling at the macromolecular level. Clustered-damage evaluation shows: that a substantial proportion of dose (~30%) is deposited by low-energy electrons; the majority of DNA damage lesions are of simple type; the complexity of damage increases with increased LET, while the total yield of strand breaks remains constant; and at high LET values nearly 70% of all double-strand breaks are of complex type.
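    The cluster analysis described above groups individual energy-deposition events into spatial clusters and then characterizes their sizes and separations. The following sketch uses synthetic data and a hand-rolled k-means routine; it is not the authors' code or the Hartigan implementation, only an illustration of the kind of calculation involved.

```python
# Illustrative sketch (not the authors' code): k-means clustering of simulated
# energy-deposition positions, of the kind used in track-structure cluster analysis.
import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # Assign each deposition event to its nearest cluster center.
        labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned events.
        new_centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Synthetic 3D deposition coordinates (nm) standing in for a simulated track.
rng = np.random.default_rng(1)
track = np.concatenate([rng.normal(loc, 2.0, size=(50, 3)) for loc in (0.0, 20.0, 45.0)])
labels, centers = kmeans(track, k=3)

# Per-cluster rms radius and center-to-center distances, analogous to the
# cluster-distance distributions discussed above.
for j, c in enumerate(centers):
    rms = np.sqrt(((track[labels == j] - c) ** 2).sum(axis=1).mean())
    print(f"cluster {j}: rms radius = {rms:.2f} nm")
dists = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
print("center-to-center distances (nm):\n", np.round(dists, 2))
```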

  2. DIANA-LncBase v2: indexing microRNA targets on non-coding transcripts.

    PubMed

    Paraskevopoulou, Maria D; Vlachos, Ioannis S; Karagkouni, Dimitra; Georgakilas, Georgios; Kanellos, Ilias; Vergoulis, Thanasis; Zagganas, Konstantinos; Tsanakas, Panayiotis; Floros, Evangelos; Dalamagas, Theodore; Hatzigeorgiou, Artemis G

    2016-01-04

    microRNAs (miRNAs) are short non-coding RNAs (ncRNAs) that act as post-transcriptional regulators of coding gene expression. Long non-coding RNAs (lncRNAs) have been recently reported to interact with miRNAs. The sponge-like function of lncRNAs introduces an extra layer of complexity in the miRNA interactome. DIANA-LncBase v1 provided a database of experimentally supported and in silico predicted miRNA Recognition Elements (MREs) on lncRNAs. The second version of LncBase (www.microrna.gr/LncBase) presents an extensive collection of miRNA:lncRNA interactions. The significantly enhanced database includes more than 70 000 low and high-throughput, (in)direct miRNA:lncRNA experimentally supported interactions, derived from manually curated publications and the analysis of 153 AGO CLIP-Seq libraries. The new experimental module presents a 14-fold increase compared to the previous release. LncBase v2 hosts in silico predicted miRNA targets on lncRNAs, identified with the DIANA-microT algorithm. The relevant module provides millions of predicted miRNA binding sites, accompanied by detailed metadata and MRE conservation metrics. LncBase v2 provides information regarding cell-type-specific miRNA:lncRNA regulation and enables users to easily identify interactions in 66 different cell types, spanning 36 tissues for human and mouse. Database entries are also supported by accurate lncRNA expression information, derived from the analysis of more than 6 billion RNA-Seq reads. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

    This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with proton beam energy of 1.3 GeV. The analysis implemented a coupled three dimensional (3D)/two dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) two dimensional deterministic code. The analysis with proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  4. Identification and Differentiation of Verticillium Species and V. longisporum Lineages by Simplex and Multiplex PCR Assays

    PubMed Central

    Inderbitzin, Patrik; Davis, R. Michael; Bostock, Richard M.; Subbarao, Krishna V.

    2013-01-01

    Accurate species identification is essential for effective plant disease management, but is challenging in fungi including Verticillium sensu stricto (Ascomycota, Sordariomycetes, Plectosphaerellaceae), a small genus of ten species that includes important plant pathogens. Here we present fifteen PCR assays for the identification of all recognized Verticillium species and the three lineages of the diploid hybrid V. longisporum. The assays were based on DNA sequence data from the ribosomal internal transcribed spacer region, and coding and non-coding regions of actin, elongation factor 1-alpha, glyceraldehyde-3-phosphate dehydrogenase and tryptophan synthase genes. The eleven single target (simplex) PCR assays resulted in amplicons of diagnostic size for V. alfalfae, V. albo-atrum, V. dahliae including V. longisporum lineage A1/D3, V. isaacii, V. klebahnii, V. nonalfalfae, V. nubilum, V. tricorpus, V. zaregamsianum, and Species A1 and Species D1, the two undescribed ancestors of V. longisporum. The four multiple target (multiplex) PCR assays simultaneously differentiated the species or lineages within the following four groups: Verticillium albo-atrum, V. alfalfae and V. nonalfalfae; Verticillium dahliae and V. longisporum lineages A1/D1, A1/D2 and A1/D3; Verticillium dahliae including V. longisporum lineage A1/D3, V. isaacii, V. klebahnii and V. tricorpus; Verticillium isaacii, V. klebahnii and V. tricorpus. Since V. dahliae is a parent of two of the three lineages of the diploid hybrid V. longisporum, no simplex PCR assay is able to differentiate V. dahliae from all V. longisporum lineages. PCR assays were tested with fungal DNA extracts from pure cultures, and were not evaluated for detection and quantification of Verticillium species from plant or soil samples. The DNA sequence alignments are provided and can be used for the design of additional primers. PMID:23823707

  5. A new three-tier architecture design for multi-sphere neutron spectrometer with the FLUKA code

    NASA Astrophysics Data System (ADS)

    Huang, Hong; Yang, Jian-Bo; Tuo, Xian-Guo; Liu, Zhi; Wang, Qi-Biao; Wang, Xu

    2016-07-01

    The current commercially available Bonner sphere neutron spectrometer (BSS) has high sensitivity to neutrons below 20 MeV, leaving it poorly placed to measure neutrons ranging from a few MeV to 100 MeV. In this paper, moderator layers and an auxiliary material layer were added around the 3He proportional counters and simulated with the FLUKA code, with a view to improving this response. The results showed that the response peaks for neutrons below 20 MeV gradually shift to a higher energy region and decrease slightly with increasing moderator thickness. In contrast, the response for neutrons above 20 MeV was always very low until auxiliary materials such as copper (Cu), lead (Pb), or tungsten (W) were embedded into the moderator layers. The most suitable auxiliary material, Pb, was chosen to design a three-tier architecture multi-sphere neutron spectrometer (NBSS). Calculations and comparisons show that the NBSS is advantageous in terms of response for 5-100 MeV neutrons, with the highest response being 35.2 times that of a polyethylene (PE) ball of the same PE thickness.

  6. Design of the Experimental Exposure Conditions to Simulate Ionizing Radiation Effects on Candidate Replacement Materials for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery

    1998-01-01

    In this effort, experimental exposure times for monoenergetic electrons and protons were determined to simulate the space radiation environment effects on Teflon components of the Hubble Space Telescope. Although the energy range of the available laboratory particle accelerators was limited, optimal exposure times for 50 keV, 220 keV, 350 keV, and 500 keV electrons were calculated that produced a dose-versus-depth profile that approximated the full spectrum profile, and were realizable with existing equipment. For the case of proton exposure, the limited energy range of the laboratory accelerator restricted simulation of the dose to a depth of 0.5 mil. Also, while optimal exposure times were found for 200 keV, 500 keV and 700 keV protons that simulated the full spectrum dose-versus-depth profile to this depth, they were of such short duration that the existing laboratory could not be controlled to within the required accuracy. In addition to the obvious experimental issues, other areas exist in which the analytical work could be advanced. Improved computer codes for the dose prediction, along with improved methodology for data input and output, would accelerate the calculational aspects and make them more accurate. This is particularly true in the case of proton fluxes, where a paucity of available predictive software appears to exist. The dated nature of many of the existing Monte Carlo particle/radiation transport codes raises the issue as to whether existing codes are sufficient for this type of analysis. Other areas that would result in greater fidelity of laboratory exposure effects to the space environment are the use of a larger number of monoenergetic particle fluxes and improved optimization algorithms to determine the weighting values.
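    Choosing exposure times so that a weighted sum of monoenergetic dose-versus-depth profiles approximates a full-spectrum profile is a non-negative least-squares problem. The sketch below illustrates that formulation with hypothetical exponential profiles; it is not the report's code or data.

```python
# Illustrative sketch (hypothetical profiles, not the report's data): choose
# non-negative exposure times for a few monoenergetic beams so that the summed
# dose-versus-depth profile approximates a target full-spectrum profile.
import numpy as np
from scipy.optimize import nnls

depth = np.linspace(0.0, 5.0, 60)                        # depth grid (mil)

# Dose-rate-vs-depth profile per beam (columns), modeled here as simple
# exponentials with energy-dependent penetration -- placeholders only.
penetration = np.array([0.4, 1.0, 1.8, 2.6])             # stand-ins for the beam energies
A = np.exp(-depth[:, None] / penetration[None, :])        # shape (n_depth, n_beams)

# Target full-spectrum dose profile (also a placeholder).
target = 1.5 * np.exp(-depth / 1.2)

times, residual = nnls(A, target)                         # exposure-time weights >= 0
print("relative exposure times:", np.round(times, 3))
print("fit residual:", round(residual, 4))
```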

  7. Design and feasibility of a multi-detector neutron spectrometer for radiation protection applications based on thermoluminescent 6LiF:Ti,Mg (TLD-600) detectors

    NASA Astrophysics Data System (ADS)

    Lis, M.; Gómez-Ros, J. M.; Bedogni, R.; Delgado, A.

    2008-01-01

    The design of a neutron detector with spectrometric capability based on thermoluminescent (TL) 6LiF:Ti,Mg (TLD-600) dosimeters located along three perpendicular axes within a single polyethylene (PE) sphere has been analyzed. The neutron response functions have been calculated in the energy range from 10^-8 to 100 MeV with the Monte Carlo (MC) code MCNPX 2.5, and their shape and behaviour have been used to discuss a suitable configuration for an actual instrument. The feasibility of such a device has been preliminarily evaluated by the simulation of exposure to 241Am-Be, bare 252Cf and Fe-PE moderated 252Cf sources. The expected accuracy in the evaluation of energy quantities has been evaluated using the unfolding code FRUIT. The obtained results, together with additional calculations performed using the MAXED and GRAVEL codes, show the spectrometric capability of the proposed design for radiation protection applications, especially in the range 1 keV-20 MeV.
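    Spectrum unfolding of the kind performed by FRUIT, MAXED, or GRAVEL recovers a spectrum whose folding with the detector response functions reproduces the measured readings. The sketch below is a minimal multiplicative (MLEM/GRAVEL-style) iteration on a synthetic response matrix; it is not any of those codes.

```python
# Minimal sketch of an iterative multiplicative unfolding (MLEM/GRAVEL-style),
# not the FRUIT, MAXED, or GRAVEL codes; response matrix and readings are synthetic.
import numpy as np

def iterative_unfold(R, m, phi0, n_iter=200):
    """R[d, e]: response of detector position d to energy bin e;
    m[d]: measured readings; phi0[e]: initial guess spectrum."""
    phi = phi0.astype(float).copy()
    for _ in range(n_iter):
        pred = R @ phi                                    # predicted readings
        # Multiplicative correction per energy bin from measured/predicted ratios.
        corr = (R * (m / pred)[:, None]).sum(axis=0) / R.sum(axis=0)
        phi *= corr
    return phi

rng = np.random.default_rng(0)
n_det, n_bins = 9, 40
R = rng.uniform(0.1, 1.0, size=(n_det, n_bins))           # synthetic response functions
true_phi = np.exp(-0.5 * ((np.arange(n_bins) - 15) / 5.0) ** 2)
m = R @ true_phi                                           # noiseless synthetic readings
phi_hat = iterative_unfold(R, m, phi0=np.ones(n_bins))
print("max relative reading error:", np.max(np.abs(R @ phi_hat / m - 1)))
```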

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceglio, N.M.; George, E.V.; Brooks, K.M.

    The first successful demonstration of high resolution, tomographic imaging of a laboratory plasma using zone plate coded imaging (ZPCI) is reported. ZPCI has been used to image the x-ray emission from laser compressed DT filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 μm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10^-2 sr. X-ray images using ZPCI were compared with those taken using a grazing incidence reflection x-ray microscope. The agreement was excellent. In addition, the zone plate camera produced tomographic images. The nominal tomographic resolution was approximately 75 μm. This allowed three dimensional viewing of target emission from a single shot in planar "slices". In addition to its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high energy alpha particle emission from laser compressed microballoon targets are discussed.

  9. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
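    The verification-file approach described above reduces regression testing to comparing recorded double-precision sums of key variables between code versions. A minimal sketch of such a comparison is shown below; the JSON file format, variable names, and tolerance are assumptions for illustration, not the RELAP5-3D tooling.

```python
# Illustrative sketch (not the RELAP5-3D tooling): sequential-verification style
# regression check comparing double-precision sums of key variables recorded in
# "verification files" from two code versions.
import json

def load_verification_sums(path):
    """Assumed format: a JSON mapping of variable name -> double-precision sum."""
    with open(path) as f:
        return json.load(f)

def compare_runs(baseline, candidate, rel_tol=1e-12):
    """Return a list of variables whose checksums drifted beyond rel_tol."""
    failures = []
    for name, ref in baseline.items():
        new = candidate.get(name)
        if new is None:
            failures.append((name, "missing in candidate"))
            continue
        denom = max(abs(ref), 1e-300)
        if abs(new - ref) / denom > rel_tol:
            failures.append((name, f"drift {abs(new - ref) / denom:.3e}"))
    return failures

if __name__ == "__main__":
    # In practice the two dictionaries would come from load_verification_sums().
    base = {"pressure_sum": 1.2345678901234e8, "voidf_sum": 4.567890123e2}
    cand = {"pressure_sum": 1.2345678901234e8, "voidf_sum": 4.567890124e2}
    for name, why in compare_runs(base, cand):
        print(f"FAIL {name}: {why}")
```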

  10. Predictive Coding in Area V4: Dynamic Shape Discrimination under Partial Occlusion

    PubMed Central

    Choi, Hannah; Pasupathy, Anitha; Shea-Brown, Eric

    2018-01-01

    The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion. After a delay, V4 responses combine both feedforward input and feedback signals from the PFC; the latter reflect predictions made by PFC about the visual stimulus underlying V4 activity. We find that this model captures key features of V4 and PFC dynamics observed in experiments. Specifically, PFC responses are strongest for occluded stimuli and delayed responses in V4 are less sensitive to occlusion, supporting our hypothesis that the feedback signals from PFC underlie robust discrimination of occluded shapes. Thus, our study proposes that area V4 and PFC participate in hierarchical inference, with feedback signals encoding top-down predictions about occluded shapes. PMID:29566355
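    The two-step inference scheme described above can be illustrated with a toy calculation: an early V4 response driven only by the occlusion-attenuated feedforward input, and a delayed response that mixes in a nearly occlusion-invariant top-down prediction from PFC. The sketch below uses made-up numbers and is not the authors' model or parameters.

```python
# Toy sketch of the two-step scheme described above (not the authors' model or
# parameters): an initial feedforward V4 response driven only by the occluded
# stimulus, followed by a delayed response that mixes in a top-down PFC prediction.
import numpy as np

def v4_response(feedforward, pfc_prediction, delay_phase, mix=0.6):
    """Early phase: bottom-up only. Late phase: weighted combination with feedback."""
    if delay_phase == "early":
        return feedforward
    return (1.0 - mix) * feedforward + mix * pfc_prediction

# Feedforward drive for a preferred shape at increasing occlusion levels
# (hypothetical numbers: more occlusion -> weaker bottom-up evidence).
occlusion = np.array([0.0, 0.25, 0.5, 0.75])
feedforward = 1.0 - 0.9 * occlusion

# PFC's prediction about the underlying (unoccluded) shape is nearly
# occlusion-invariant in this toy example.
pfc_prediction = np.full_like(feedforward, 0.95)

print("early V4:", np.round(v4_response(feedforward, pfc_prediction, "early"), 2))
print("late  V4:", np.round(v4_response(feedforward, pfc_prediction, "late"), 2))
# The late responses vary much less with occlusion, mirroring the reduced
# occlusion sensitivity of delayed V4 responses described in the abstract.
```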

  11. PHANTOM: Practical Oblivious Computation in a Secure Processor

    DTIC Science & Technology

    2014-05-16

    Excerpted table-of-contents and acknowledgment fragments: the report's sections cover utilizing multiple FPGAs and an implementation on the HC-2ex, and the acknowledgments credit contributions to the code base (in particular the integration between the ORAM controller and the RISC-V processor) and thank the team that developed the RISC-V processor Phantom is using.

  12. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
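    QAC represents each logical spin with several physical copies (plus a penalty qubit) and decodes the hardware readout by majority vote. The sketch below illustrates only the majority-vote decoding step on synthetic readouts; it is not the full QAC scheme or any D-Wave interface.

```python
# Minimal sketch in the spirit of the error-correction strategy described above:
# each logical spin is represented by several physical copies and decoded by
# majority vote. This is NOT the full QAC scheme (which also couples the copies
# to a penalty qubit); it only illustrates the decoding step on synthetic data.
import numpy as np

def decode_majority(physical_spins, copies_per_logical):
    """physical_spins: array of +/-1 values, grouped contiguously by logical qubit."""
    grouped = physical_spins.reshape(-1, copies_per_logical)
    return np.sign(grouped.sum(axis=1))        # ties cannot occur for odd copy counts

rng = np.random.default_rng(0)
n_logical, copies = 86, 3   # 86 logical spins; QAC uses 3 data copies plus a penalty qubit (344 physical)
truth = rng.choice([-1, 1], size=n_logical)

# Physical readout with independent bit-flip noise on each copy (illustrative).
noisy = np.repeat(truth, copies) * rng.choice([1, -1], size=n_logical * copies, p=[0.85, 0.15])
decoded = decode_majority(noisy, copies)

print("physical copy error rate  :", np.mean(np.repeat(truth, copies) != noisy))
print("decoded logical error rate:", np.mean(decoded != truth))
```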

  13. 76 FR 31393 - Federal Acquisition Regulation; Federal Acquisition Circular 2005-52; Introduction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... 2010-017 Robinson. Ethics Programs. VI Technical Amendments... SUPPLEMENTARY INFORMATION: Summaries for... technology that is a commercial item. Item V--Oversight of Contractor Ethics Programs (FAR Case 2010-017... Code of Business Ethics and Conduct. Contracting officers may ask to see a contractor's code of ethics...

  14. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  15. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  16. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  17. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  18. 40 CFR 147.200 - State-administered program-Class I, III, IV, and V wells.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the program administered by the Arkansas Department of Pollution Control and Ecology approved by EPA... Code, Department of Pollution Control and Ecology, promulgated January 22, 1982; (4) General Rule and... Management Code, Department of Pollution Control and Ecology, promulgated August 21, 1981. (b) The Memorandum...

  19. Analyzing Prosocial Content on T.V.

    ERIC Educational Resources Information Center

    Davidson, Emily S.; Neale, John M.

    To enhance knowledge of television content, a prosocial code was developed by watching a large number of potentially prosocial television programs and making notes on all the positive acts. The behaviors were classified into a workable number of categories. The prosocial code is largely verbal and contains seven categories which fall into two…

  20. Verification and Validation Strategy for LWRS Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl M. Stoots; Richard R. Schultz; Hans D. Gougar

    2012-09-01

    One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models / codes such as RELAP-7 can be used for reactor licensing / relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203 the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants / designs should be studied first, coupling modern supporting experiments to the stringent needs of new high fidelity models / codes, and scaling of aging effects.

  1. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  2. Verification and validation of RADMODL Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.

  3. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
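    The restricted three-body approach treats the disk material as massless test particles moving in the gravitational field of the two galaxy centers. The sketch below integrates such test particles around two point masses with a leapfrog scheme in G = 1 units; it is deliberately simplified and is not the JSPAM algorithm (no disk potential model, no dynamical friction, and the perturber is held fixed).

```python
# Minimal sketch of the restricted three-body idea behind codes like JSPAM
# (not the JSPAM algorithm itself): massless test particles respond to two
# point-mass "galaxies" in G = 1 units.
import numpy as np

G = 1.0

def accel(pos, primaries, masses, soft=0.05):
    a = np.zeros_like(pos)
    for p, m in zip(primaries, masses):
        d = p - pos                                    # vectors toward each primary
        r2 = (d ** 2).sum(axis=1) + soft ** 2
        a += G * m * d / r2[:, None] ** 1.5
    return a

def leapfrog(pos, vel, primaries, masses, dt=0.01, steps=2000):
    a = accel(pos, primaries, masses)
    for _ in range(steps):
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos, primaries, masses)
        vel += 0.5 * dt * a
    return pos, vel

# Ring of test particles around primary 1, with primary 2 as a fixed perturber
# (a real code would also integrate the primaries' mutual orbit).
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pos = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
vel = np.column_stack([-np.sin(theta), np.cos(theta), np.zeros_like(theta)])  # near-circular orbits
primaries = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
masses = [1.0, 0.5]

pos, vel = leapfrog(pos, vel, primaries, masses)
print("final radii about primary 1:", np.round(np.linalg.norm(pos, axis=1)[:5], 2))
```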

  4. Review of heavy charged particle transport in MCNP6.2

    NASA Astrophysics Data System (ADS)

    Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.

    2018-04-01

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  5. Predicting Cavitation on Marine and Hydrokinetic Turbine Blades with AeroDyn V15.04

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Robynne

    Cavitation is an important consideration in the design of marine and hydrokinetic (MHK) turbines. The National Renewable Energy Laboratory's AeroDyn performance code was originally developed for horizontal-axis wind turbines and did not have the capability to predict cavitation inception. Therefore, AeroDyn has been updated to include the ability to predict cavitation on MHK turbines based on user-specified vapor pressure and submerged depth. This report outlines a verification of the AeroDyn V15.04 performance code for MHK turbines through a comparison to publicly available performance data.
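    Cavitation inception for a blade section is commonly checked by comparing the local cavitation number, built from atmospheric pressure, the hydrostatic head at the submerged depth, and the vapor pressure, against the magnitude of the minimum pressure coefficient. The sketch below shows that generic check with assumed property values; it is not AeroDyn's implementation, whose exact inputs and criterion are documented in the report.

```python
# Generic sketch (not AeroDyn's implementation): cavitation-inception check for a
# blade section from vapor pressure, submerged depth, and local relative speed.
RHO_WATER = 1025.0       # kg/m^3, seawater (assumed)
G = 9.81                 # m/s^2
P_ATM = 101325.0         # Pa

def cavitation_number(depth_m, rel_speed_ms, vapor_pressure_pa=2300.0):
    """sigma = (p_atm + rho*g*h - p_v) / (0.5 * rho * V_rel^2)."""
    p_static = P_ATM + RHO_WATER * G * depth_m
    return (p_static - vapor_pressure_pa) / (0.5 * RHO_WATER * rel_speed_ms ** 2)

def cavitates(depth_m, rel_speed_ms, min_pressure_coeff, vapor_pressure_pa=2300.0):
    """Inception is flagged when -Cp,min exceeds the local cavitation number."""
    sigma = cavitation_number(depth_m, rel_speed_ms, vapor_pressure_pa)
    return -min_pressure_coeff > sigma

# Example: blade section 5 m below the surface moving at 12 m/s relative to the
# flow, with a minimum pressure coefficient of -3.0 (all values hypothetical).
print(cavitation_number(5.0, 12.0))     # local cavitation number
print(cavitates(5.0, 12.0, -3.0))       # True if inception is predicted
```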

  6. Review of Heavy Charged Particle Transport in MCNP6.2

    DOE PAGES

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George; ...

    2018-01-05

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. Here, this article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models’ theories are included as well.

  7. MicroV Technology to Improve Transcranial Color Coded Doppler Examinations.

    PubMed

    Malferrari, Giovanni; Pulito, Giuseppe; Pizzini, Attilia Maria; Carraro, Nicola; Meneghetti, Giorgio; Sanzaro, Enzo; Prati, Patrizio; Siniscalchi, Antonio; Monaco, Daniela

    2018-05-04

    The purpose of this review is to provide an update on technology related to Transcranial Color Coded Doppler Examinations. Microvascularization (MicroV) is an emerging Power Doppler technology which can allow visualization of low and weak blood flows even at high depths, thus providing a suitable technique for transcranial ultrasound analysis. With MicroV, reconstruction of the vessel shape can be improved, without any overestimation. Furthermore, by analyzing the Doppler signal, MicroV allows a global image of the Circle of Willis. Transcranial Doppler was originally developed for the velocimetric analysis of intracranial vessels, in particular to detect stenoses and to assess collateral circulation. Doppler velocimetric analysis was then compared to other neuroimaging techniques, thus providing a cut-off threshold. Transcranial Color Coded Doppler sonography allowed the characterization of vessel morphology. In both Color Doppler and Power Doppler, the signal overestimated the shape of the intracranial vessels, mostly in the presence of thin vessels and high depths of study. In further neurosonology technology development efforts, attempts have been made to address morphology issues and overcome technical limitations. The use of contrast agents has helped in this regard by introducing harmonics and subtraction software, which allowed better morphological studies of vessels, due to their increased signal-to-noise ratio. With no limitations related to learning curve, time, or contrast agent techniques, and with its high signal-to-noise ratio, MicroV has shown great potential to obtain the best morphological definition. Copyright © 2018 by the American Society of Neuroimaging.

  8. Computer Description of the M561 Utility Truck

    DTIC Science & Technology

    1984-10-01

    Excerpted report-documentation fragments: keywords include the GIFT computer code and sustainability predictions for Army Spare Components Requirements for Combat (SPARC). The recoverable abstract text states that vulnerability analysis requires input from the Geometric Information for Targets (GIFT) computer code, and that this report documents the combinatorial geometry (Com-Geom) description of the M561 utility truck used as input to the GIFT computer code to generate target vulnerability data.

  9. Applications of Some New Ideas on Irreversible Processes to Particular Fluids.

    DTIC Science & Technology

    1987-09-23

    Excerpted report-documentation fragments: Applications of Some New Ideas on Irreversible Processes to Particular Fluids, Johns Hopkins University, Baltimore, MD, Department of Rational Mechanics (Baltimore, MD 21218), with the sponsoring office at Bolling AFB, DC 20332; principal investigator Clifford A. Truesdell, Professor, Program in Rational Mechanics.

  10. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.

  11. DETECTORS AND EXPERIMENTAL METHODS: Measurement of the response function and the detection efficiency of an organic liquid scintillator for neutrons between 1 and 30 MeV

    NASA Astrophysics Data System (ADS)

    Huang, Han-Xiong; Ruan, Xi-Chao; Chen, Guo-Chang; Zhou, Zu-Ying; Li, Xia; Bao, Jie; Nie, Yang-Bo; Zhong, Qi-Ping

    2009-08-01

    The light output function of a φ50.8 mm × 50.8 mm BC501A scintillation detector was measured in the neutron energy region of 1 to 30 MeV by fitting the pulse height (PH) spectra for neutrons with the simulations from the NRESP code at the edge range. Using the new light output function, the neutron detection efficiency was determined with two Monte-Carlo codes, NEFF and SCINFUL. The calculated efficiency was corrected by comparing the simulated PH spectra with the measured ones. The determined efficiency was verified at the near threshold region and normalized with a Proton-Recoil-Telescope (PRT) at the 8-14 MeV energy region.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckert-Gallup, Aubrey Celia; Lewis, John R.; Brooks, Dusty Marie

    This report describes the methods, results, and conclusions of the analysis of 11 scenarios defined to exercise various options available in the xLPR (Extremely Low Probability of Rupture) Version 2.0 code. The scope of the scenario analysis is threefold: (i) exercise the various options and components comprising xLPR v2.0 and defining each scenario; (ii) develop and exercise methods for analyzing and interpreting xLPR v2.0 outputs; and (iii) exercise the various sampling options available in xLPR v2.0. The simulation workflow template developed during the course of this effort helps to form a basis for the application of the xLPR code to problems with similar inputs and probabilistic requirements and address in a systematic manner the three points covered by the scope.

  13. Complete Genome Sequence of Bacteroides ovatus V975

    PubMed Central

    Goesmann, Alexander; Carding, Simon R.

    2016-01-01

    The complete genome sequence of Bacteroides ovatus V975 was determined. The genome consists of a single circular chromosome of 6,475,296 bp containing five rRNA operons, 68 tRNA genes, and 4,959 coding genes. PMID:27908995

  14. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

    Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models" Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in low density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work was, as described here, the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response then is fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, its response can be fed back to the beam immediately without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty was when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasistatic scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June, 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June, 2009, paper FR5RFP078.

  15. Applied Stochastic Eigen-Analysis

    DTIC Science & Technology

    2007-02-01

    Excerpted fragments (garbled in extraction): Section 6.4 concerns the companion matrix C_uv, with respect to u, of the bivariate polynomial L_uv(u, v) given by (6.32), for which det(uI - C_uv) = L_uv(u, v)/l_uv(v); the indexing for i and j starts with zero. The remainder of the excerpt consists of fragments of MATLAB code for constructing such companion matrices.

  16. Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products

    NASA Astrophysics Data System (ADS)

    Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.

    2017-12-01

    The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling for more than two decades. During this time, the AERONET AOD database had utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near real time (NRT) and historical worldwide database of AOD. The combined effect of these new changes provides a historical V3 AOD Level 2.0 data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer code called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code allows for potentially improving the real part of the complex index of refraction and the sphericity parameter and computing the radiation field in the UV (e.g., 380 nm) and the degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to accommodate H2O, O3 and NO2 absorption, to be consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow covered surfaces. The V3 inversion products use the same data quality assurance criteria as V2 inversions (Holben et al. 2006). The entire AERONET V3 almucantar inversion database was computed using the NASA High End Computing resources at NASA Ames Research Center and NASA Goddard Space Flight Center. In addition to a description of data products, this presentation will provide a comparison of the V3 Level 2.0 and V2 Level 2.0 AOD and inversion climatologies for sites with varying aerosol types.

  17. School Dress Codes v. The First Amendment: Ganging up on Student Attire.

    ERIC Educational Resources Information Center

    Jahn, Karon L.

    Do school dress codes written with the specific purpose of limiting individual dress preferences, including dress associated with gangs, infringe on speech freedoms granted by the First Amendment of the U.S. Constitution? Although the Supreme Court has extended its protection of political speech to nonverbal acts of communication, it has…

  18. 32 CFR 634.25 - Installation traffic codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... riding in a POV whether on or off the installation. (iv) Infant/child restraint devices (car seats) will be required in POVs for children 4 years old or under and not exceeding 45 pounds in weight. (v... development and publication of installation traffic codes will be based on the following: (1) Highway Safety...

  19. 32 CFR 634.25 - Installation traffic codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... riding in a POV whether on or off the installation. (iv) Infant/child restraint devices (car seats) will be required in POVs for children 4 years old or under and not exceeding 45 pounds in weight. (v... code of the State or host nation in which the installation is located. In addition, the development and...

  20. 32 CFR 634.25 - Installation traffic codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... riding in a POV whether on or off the installation. (iv) Infant/child restraint devices (car seats) will be required in POVs for children 4 years old or under and not exceeding 45 pounds in weight. (v... code of the State or host nation in which the installation is located. In addition, the development and...

  1. 32 CFR 634.25 - Installation traffic codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... riding in a POV whether on or off the installation. (iv) Infant/child restraint devices (car seats) will be required in POVs for children 4 years old or under and not exceeding 45 pounds in weight. (v... code of the State or host nation in which the installation is located. In addition, the development and...

  2. 32 CFR 634.25 - Installation traffic codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... riding in a POV whether on or off the installation. (iv) Infant/child restraint devices (car seats) will be required in POVs for children 4 years old or under and not exceeding 45 pounds in weight. (v... development and publication of installation traffic codes will be based on the following: (1) Highway Safety...

  3. Decisions: "Carltona" and the CUC Code

    ERIC Educational Resources Information Center

    Evans, G. R.

    2006-01-01

    The Committee of University Chairmen publishes a code of good practice designed, among other things, to ensure clarity about the authority on which decisions are taken on behalf of universities, subordinate domestic legislation created and the exercise of discretion regulated. In Carltona Ltd. v. Commissioners of Works [1943] 2 All ER 560 AC the…

  4. Analysis of On-Board Oxygen and Nitrogen Generation Systems for Surface Vessels.

    DTIC Science & Technology

    1983-06-01

    Excerpted list of referenced standards, including: the ASME Boiler and Pressure Vessel Code (Section VIII); SAE AIR 822, Oxygen for General Aviation Aircraft; SAE AIR 825, Oxygen for Aircraft; SAE AIR 1059, Transportation and Maintenance; MIL-T-27730, threaded components; MIL-P-27401, a 40 micron filter for nitrogen; and MIL-V-33650, internal straight threads.

  5. Development of Milestone Schedules for Selected Logistics Support Directorate Programs. Appendix A. Part 2. Task Summaries.

    DTIC Science & Technology

    1987-09-15

    Excerpted task-summary fragments: milestone entries include preparation of the Repair Parts and Special Tools List (RPSTL) (code number: none; responsibility: ROY & ILS; duration: 32.00 work days; pre-PPPL schedule) and review and validation of the Preliminary Provisioning Parts List (DVPMARPS; code number: none; responsibility: ILS; duration: 22.00 work days; R/V PPPL schedule).

  6. VS30 – A site-characterization parameter for use in building Codes, simplified earthquake resistant design, GMPEs, and ShakeMaps

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2012-01-01

    VS30, defined as the average seismic shear-wave velocity from the surface to a depth of 30 meters, has found wide-spread use as a parameter to characterize site response for simplified earthquake resistant design as implemented in building codes worldwide. VS30, as initially introduced by the author for the US 1994 NEHRP Building Code, provides unambiguous definitions of site classes and site coefficients for site-dependent response spectra based on correlations derived from extensive borehole logging and comparative ground-motion measurement programs in California. Subsequent use of VS30 for development of strong ground motion prediction equations (GMPEs) and measurement of extensive sets of VS borehole data have confirmed the previous empirical correlations and established correlations of VS30 with VSZ at other depths. These correlations provide closed form expressions to predict VS30 at a large number of additional sites and further justify VS30 as a parameter to characterize site response for simplified building codes, GMPEs, ShakeMap, and seismic hazard mapping.
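    VS30 is a travel-time average: 30 m divided by the total vertical shear-wave travel time through the uppermost 30 m of the profile. The sketch below computes it for a hypothetical layered borehole profile.

```python
# Sketch of the standard travel-time-average definition of VS30: 30 m divided by
# the total shear-wave travel time through the top 30 m of a layered profile.
def vs30(layer_thicknesses_m, layer_velocities_ms):
    """VS30 = 30 / sum(h_i / Vs_i), using only the uppermost 30 m."""
    remaining, travel_time = 30.0, 0.0
    for h, v in zip(layer_thicknesses_m, layer_velocities_ms):
        used = min(h, remaining)
        travel_time += used / v
        remaining -= used
        if remaining <= 0.0:
            break
    if remaining > 0.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# Hypothetical borehole profile: 5 m at 180 m/s, 10 m at 320 m/s, 20 m at 550 m/s.
print(round(vs30([5.0, 10.0, 20.0], [180.0, 320.0, 550.0]), 1))  # ~348 m/s (NEHRP site class D)
```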

  7. Investigation of neutral particle dynamics in Aditya tokamak plasma with DEGAS2 code

    NASA Astrophysics Data System (ADS)

    Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.; Manchanda, R.; Banerjee, S.; Ramaiya, N.; Sharma, Deepti; Srinivasan, R.; Stotler, D. P.; Aditya Team

    2017-08-01

    Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration; it has been successfully implemented here for the Aditya limiter configuration. The penetration of neutral hydrogen atoms is studied with the various atomic and molecular contributions included, and it is found that the maximum contribution comes from the dissociation processes. The H α spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and H α emissivity profiles is analysed for various edge temperature profiles, and it is found that the H α emission at the plasma edge changes little with the variation of edge temperature (7-40 eV).

  8. Investigation of neutral particle dynamics in Aditya tokamak plasma with DEGAS2 code

    DOE PAGES

    Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.; ...

    2017-06-09

    Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration; it has been successfully implemented here for the Aditya limiter configuration. The penetration of neutral hydrogen atoms is studied with the various atomic and molecular contributions included, and it is found that the maximum contribution comes from the dissociation processes. The H α spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and H α emissivity profiles is analysed for various edge temperature profiles, and it is found that the H α emission at the plasma edge changes little with the variation of edge temperature (7 to 40 eV).

  9. Beauty is in the efficient coding of the beholder.

    PubMed

    Renoult, Julien P; Bovet, Jeanne; Raymond, Michel

    2016-03-01

    Sexual ornaments are often assumed to be indicators of mate quality. Yet it remains poorly known how certain ornaments are chosen before any coevolutionary race makes them indicative. Perceptual biases have been proposed to play this role, but known biases are mostly restricted to a specific taxon, which precludes evaluating their general importance in sexual selection. Here we identify a potentially universal perceptual bias in mate choice. We used an algorithm that models the sparseness of the activity of simple cells in the primary visual cortex (or V1) of humans when coding images of female faces. Sparseness was found positively correlated with attractiveness as rated by men and explained up to 17% of variance in attractiveness. Because V1 is adapted to process signals from natural scenes, in general, not faces specifically, our results indicate that attractiveness for female faces is influenced by a visual bias. Sparseness and more generally efficient neural coding are ubiquitous, occurring in various animals and sensory modalities, suggesting that the influence of efficient coding on mate choice can be widespread in animals.
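    The record does not reproduce the exact sparseness measure used, so the following is a hedged sketch of one common choice, the Vinje-Gallant sparseness index, applied to a vector of non-negative model simple-cell responses; the random response vector is purely a stand-in for real filter outputs and is not the paper's model.

        import numpy as np

        # Hedged sketch: Vinje-Gallant sparseness index for a vector of non-negative
        # model responses to a single image. The gamma-distributed responses are a
        # stand-in; the paper's actual V1 model and sparseness definition may differ.
        def sparseness(responses):
            r = np.abs(np.asarray(responses, dtype=float))
            n = r.size
            a = (r.mean() ** 2) / np.mean(r ** 2)    # a = (sum r / n)^2 / (sum r^2 / n)
            return (1.0 - a) / (1.0 - 1.0 / n)       # 0 = dense, 1 = maximally sparse

        rng = np.random.default_rng(0)
        responses = rng.gamma(shape=0.5, scale=1.0, size=1024)
        print(round(sparseness(responses), 3))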

  10. Tempest simulations of kinetic GAM mode and neoclassical turbulence

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Dimits, A. M.

    2007-11-01

    TEMPEST is a nonlinear five-dimensional (3d2v) gyrokinetic continuum code for studies of H-mode edge plasma neoclassical transport and turbulence in real divertor geometry. The 4D TEMPEST code correctly produces the frequency and collisionless damping of GAMs and zonal flow with fully nonlinear Boltzmann electrons in homogeneous plasmas. For large q = 4 to 9, the TEMPEST simulations show that a series of resonances at higher harmonics v|| = ωG qR0/n with n = 4 becomes effective. The TEMPEST simulations also show that GAMs exist in the edge plasma pedestal for steep density and temperature gradients, and that an initial GAM relaxes to the standard neoclassical residual with neoclassical transport, rather than the Rosenbluth-Hinton residual, due to the presence of ion-ion collisions. The enhanced GAM damping explains experimental BES measurements of the edge q scaling of the GAM amplitude. Our 5D gyrokinetic code is built on the 4D TEMPEST neoclassical code, with extension to a fifth dimension in the toroidal direction and with 3D domain decomposition. Progress on performing 5D neoclassical turbulence simulations will be reported.

  11. Status of the Space Radiation Monte Carlo Simulation Based on FLUKA and ROOT

    NASA Technical Reports Server (NTRS)

    Andersen, Victor; Carminati, Federico; Empl, Anton; Ferrari, Alfredo; Pinsky, Lawrence; Sala, Paola; Wilson, Thomas L.

    2002-01-01

    The NASA-funded project reported on at the first IWSSRR in Arona to develop a Monte-Carlo simulation program for use in simulating the space radiation environment based on the FLUKA and ROOT codes is well into its second year of development, and considerable progress has been made. The general tasks required to achieve the final goals include the addition of heavy-ion interactions into the FLUKA code and the provision of a ROOT-based interface to FLUKA. The most significant progress to date includes the incorporation of the DPMJET event generator code within FLUKA to handle heavy-ion interactions for incident projectile energies greater than 3GeV/A. The ongoing effort intends to extend the treatment of these interactions down to 10 MeV, and at present two alternative approaches are being explored. The ROOT interface is being pursued in conjunction with the CERN LHC ALICE software team through an adaptation of their existing AliROOT software. As a check on the validity of the code, a simulation of the recent data taken by the ATIC experiment is underway.

  12. Biases in detection of apparent "weekend effect" on outcome with administrative coding data: population based study of stroke.

    PubMed

    Li, Linxin; Rothwell, Peter M

    2016-05-16

    Objective: To determine the accuracy of coding of admissions for stroke on weekdays versus weekends and any impact on apparent outcome. Design: Prospective population based stroke incidence study and a scoping review of previous studies of weekend effects in stroke. Setting: Primary and secondary care of all individuals registered with nine general practices in Oxfordshire, United Kingdom (OXVASC, the Oxford Vascular Study). Participants: All patients with clinically confirmed acute stroke in OXVASC identified with multiple overlapping methods of ascertainment in 2002-14 versus all acute stroke admissions identified by hospital diagnostic and mortality coding alone during the same period. Main outcome measures: Accuracy of administrative coding data for all patients with confirmed stroke admitted to hospital in OXVASC; difference between rates of "false positive" or "false negative" coding for weekday and weekend admissions; impact of inaccurate coding on apparent case fatality at 30 days in weekday versus weekend admissions; and weekend effects on outcomes in patients with confirmed stroke admitted to hospital in OXVASC and impacts of other potential biases compared with those in the scoping review. Results: Among the study population of 92 728, 2373 episodes of acute stroke were ascertained in OXVASC, of which 826 (34.8%) mainly minor events were managed without hospital admission, 60 (2.5%) occurred out of the area or abroad, and 195 (8.2%) occurred in hospital during an admission for a different reason. Of 1292 local hospital admissions for acute stroke, 973 (75.3%) were correctly identified by administrative coding. There was no bias in distribution of weekend versus weekday admission of the 319 strokes missed by coding. Of 1693 admissions for stroke identified by coding, 1055 (62.3%) were confirmed to be acute strokes after case adjudication. Among the 638 false positive coded cases, patients were more likely to be admitted on weekdays than at weekends (536 (41.0%) v 102 (26.5%); P<0.001), partly because of weekday elective admissions after previous stroke being miscoded as new stroke episodes (267 (49.8%) v 26 (25.5%); P<0.001). The 30 day case fatality after these elective admissions was lower than after confirmed acute stroke admissions (11 (3.8%) v 233 (22.1%); P<0.001). Consequently, relative 30 day case fatality for weekend versus weekday admissions differed (P<0.001) between correctly coded acute stroke admissions and false positive coding cases. Results were consistent when only the 1327 emergency cases identified by "admission method" from coding were included, with more false positive cases with low case fatality (35 (14.7%)) being included for weekday versus weekend admissions (190 (19.5%) v 48 (13.7%); P<0.02). Among all acute stroke admissions in OXVASC, there was no imbalance in baseline stroke severity for weekends versus weekdays and no difference in case fatality at 30 days (adjusted odds ratio 0.85, 95% confidence interval 0.63 to 1.15; P=0.30) or any adverse "weekend effect" on modified Rankin score at 30 days (0.78, 0.61 to 0.99; P=0.04) or one year (0.76, 0.59 to 0.98; P=0.03) among incident strokes. Conclusions: Retrospective studies of UK administrative hospital coding data to determine "weekend effects" on outcome in acute medical conditions, such as stroke, can be undermined by inaccurate coding, which can introduce biases that cannot be reliably dealt with by adjustment for case mix.
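    As an illustration of the weekday-versus-weekend imbalance in false-positive coding reported above, the sketch below runs a chi-squared test on a 2x2 table; the denominators (~1307 weekday and ~385 weekend coded admissions) are back-calculated from the quoted percentages and should be treated as approximate.

        from scipy.stats import chi2_contingency

        # Approximate reconstruction of the reported comparison: 536/1307 weekday vs
        # 102/385 weekend coded admissions judged false positive after adjudication.
        false_pos = [536, 102]
        totals = [1307, 385]
        table = [[fp, tot - fp] for fp, tot in zip(false_pos, totals)]
        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # consistent with the reported P<0.001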

  13. The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.

    2014-02-01

    A proton therapy test facility with an average beam current lower than 10 nA and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure reaching the energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is therefore almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented in radiation transport computer codes based on the Monte Carlo method. The aim is the assessment of the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each one utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented in the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out disadvantages and advantages of each code in this specific application.

  14. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques of adaptive mesh refinement (AMR) allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace the evolution of the boundaries. Our AMR code is thus well suited to studying thermal-chemical convection problems which need high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].

  15. WWER-1000 core and reflector parameters investigation in the LR-0 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.

    2006-07-01

    Measurements and calculations carried out in the core and reflector of a WWER-1000 mock-up are discussed: - the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constants preparation) and RADAR; - the fast neutron spectra measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in the P3S8 approximation of the discrete ordinates method with the code DORT and the BUGLE-96 library; - the neutron spectra evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV based on the activation and solid state track detector measurements. (authors)

  16. EDS V25 containment vessel explosive qualification test report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudolphi, John Joseph

    2012-04-01

    The V25 containment vessel was procured by the Project Manager, Non-Stockpile Chemical Materiel (PMNSCM) as a replacement vessel for use on the P2 Explosive Destruction Systems. It is the first EDS vessel to be fabricated under Code Case 2564 of the ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel based on the Code Case is nine (9) pounds TNT-equivalent for up to 637 detonations. This limit is an increase from the 4.8 pounds TNT-equivalency rating for previous vessels. This report describes the explosive qualification tests that were performed in the vessel as part of the process for qualifying the vessel for explosive use. The tests consisted of an 11.25 pound TNT equivalent bare charge detonation followed by a 9 pound TNT equivalent detonation.

  17. Neutron flux measurements on a mock-up of a storage cask for high-level nuclear waste using 2.5 MeV neutrons.

    PubMed

    Suárez, H Saurí; Becker, F; Klix, A; Pang, B; Döring, T

    2018-06-07

    To store and dispose of spent nuclear fuel, shielding casks are employed to reduce the emitted radiation. To evaluate the exposure of employees handling such casks, Monte Carlo radiation transport codes can be employed. Nevertheless, to assess the reliability of these codes and nuclear data, experimental checks are required. In this study, a neutron generator (NG) producing neutrons of 2.5 MeV was employed to simulate neutrons produced in spent nuclear fuel. Different configurations of shielding layers of steel and polyethylene were positioned between the target of the NG and an NE-213 detector. The results of the measurements of neutron and γ radiation and the corresponding simulations with the code MCNP6 are presented. Details of the experimental set-up as well as neutron and photon flux spectra are provided as reference points for such NG investigations with shielding structures.

  18. A new response matrix for a 6LiI scintillator BSS system

    NASA Astrophysics Data System (ADS)

    Lacerda, M. A. S.; Méndez-Villafañe, R.; Lorente, A.; Ibañez, S.; Gallego, E.; Vega-Carrillo, H. R.

    2017-10-01

    A new response matrix was calculated for a Bonner Sphere Spectrometer (BSS) with a 6LiI(Eu) scintillator, using the Monte Carlo N-Particle radiation transport code MCNPX. Responses were calculated for 6 spheres and the bare detector, for energies varying from 1.059E(-9) MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totalling 221 energy groups. A comparison was made between the responses obtained in this work and others published elsewhere for the same detector model. The calculated response functions were inserted in the response input file of the MAXED code and used to unfold the total and direct neutron spectra generated by the 241Am-Be source of the Universidad Politécnica de Madrid (UPM). These spectra were compared with those obtained using the same unfolding code with the Mares and Schraube response matrix.
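    The energy group structure described above (20 equal-log(E)-width bins per decade over the 11 decades from 1.059E-9 MeV to 105.9 MeV, 221 groups) can be generated as in the sketch below; whether the 221 values are group boundaries or midpoints is an assumption made here, not stated in the record.

        import numpy as np

        # Sketch of the equal-log(E)-width grid: 20 points per decade over 11 decades
        # gives 221 tabulated energies. Treated here simply as the 221 energy values.
        e_min, e_max, per_decade = 1.059e-9, 105.9, 20
        n_points = int(round(np.log10(e_max / e_min) * per_decade)) + 1   # 221
        energies = np.logspace(np.log10(e_min), np.log10(e_max), n_points)
        print(n_points, energies[0], energies[-1])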

  19. Roos and NACP-02 ion chamber perturbations and water-air stopping-power ratios for clinical electron beams for energies from 4 to 22 MeV

    NASA Astrophysics Data System (ADS)

    Bailey, M.; Shipley, D. R.; Manning, J. W.

    2015-02-01

    Empirical fits are developed for depth-compensated wall- and cavity-replacement perturbations in the PTW Roos 34001 and IBA / Scanditronix NACP-02 parallel-plate ionisation chambers, for electron beam qualities from 4 to 22 MeV for depths up to approximately 1.1 × R50,D. These are based on calculations using the Monte Carlo radiation transport code EGSnrc and its user codes with a full simulation of the linac treatment head modelled using BEAMnrc. These fits are used with calculated restricted stopping-power ratios between air and water to match measured depth-dose distributions in water from an Elekta Synergy clinical linear accelerator at the UK National Physical Laboratory. Results compare well with those from recent publications and from the IPEM 2003 electron beam radiotherapy Code of Practice.

  20. Neutron skyshine from intense 14-MeV neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T.; Hayashi, K.; Takahashi, A.

    1985-07-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with the high-efficiency rem counter, the multisphere spectrometer, and the NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable the absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates Sn codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for shielding design of fusion facilities.

  1. Strategic and Tactical Decision-Making Under Uncertainty

    DTIC Science & Technology

    2006-01-03

    message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial-response channel ("Joint Decoding of LDPC Codes and Partial-Response Channels," IEEE Transactions on Communications, Vol. 54, No. 7, 1149-1153, 2006; P. Pakzad and V. ... Michael I. Jordan).

  2. Observations on Polar Coding with CRC-Aided List Decoding

    DTIC Science & Technology

    2016-09-01

    1. INTRODUCTION. Polar codes are a new type of forward error correction (FEC) code, introduced by Arikan in [1], in which he ... error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are ... good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector ...

  3. Preparation and Use of Liposomes in Immunological Studies

    DTIC Science & Technology

    1993-01-01

    Performing organization: Division of Biochemistry, Walter Reed Army Institute of Research, Washington, DC 20307-5100.

  4. Study regarding the density evolution of messages and the characteristic functions associated of a LDPC code

    NASA Astrophysics Data System (ADS)

    Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; (Barboni Haţiegan, L.

    2017-01-01

    In this paper a method is shown with which a set of characteristic functions is associated to an LDPC code, together with functions that represent the density evolution of the messages that travel along the edges of a Tanner graph. Graphic representations of the density evolution are shown, and the study and simulation of the likelihood threshold that gives the asymptotic boundary between decodable and non-decodable codes were carried out using MathCad V14 software.
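    The record does not specify the ensemble or channel model, so the sketch below shows density evolution in its simplest setting, an LDPC ensemble over the binary erasure channel, where the message densities collapse to a single erasure probability and the decoding threshold can be found by bisection; the (3,6)-regular degree distribution is an assumption for illustration.

        # Hedged sketch: density evolution for a (3,6)-regular LDPC ensemble on the
        # binary erasure channel. The erasure probability x of messages on the Tanner
        # graph edges evolves as x <- eps * (1 - (1 - x)^(dc-1))^(dv-1).
        def converges(eps, dv=3, dc=6, iters=2000, tol=1e-9):
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                if x < tol:
                    return True
            return False

        def threshold(dv=3, dc=6):
            lo, hi = 0.0, 1.0               # bisection on the channel erasure probability
            for _ in range(40):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if converges(mid, dv, dc) else (lo, mid)
            return lo

        print(round(threshold(), 4))        # ~0.429 for the (3,6)-regular ensemble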

  5. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols

    DTIC Science & Technology

    2010-08-01

    arXiv:1008.3196v1 [cs.IT] 19 Aug 2010. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols. Don ... sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding ... amplitude, phase, and the interference power spectral density (PSD) due to the combined interference and thermal noise is proposed for DS-CDMA systems

  6. Logistics Support Analysis Techniques Guide

    DTIC Science & Technology

    1985-03-15

    LANGUAGE (DATA RECORDS): FORTRAN, CDC 6600 ... REMARKS: Program consists of ... approx 4000 lines of coding ... Safeguard, AN/FSC ... FORTRAN IV ... The model consists of approximately 367 lines of coding ... SINCGARS, PERSHING II ... LSA TASK INTERFACE ... system supported by Computer Systems Command. The current version of LADEN is coded totally in FORTRAN 77 for a virtual memory operating system

  7. Chirp- and random-based coded ultrasonic excitation for localized blood-brain barrier opening

    PubMed Central

    Kamimura, HAS; Wang, S; Wu, S-Y; Karakatsani, ME; Acosta, C; Carneiro, AAO; Konofagou, EE

    2015-01-01

    Chirp- and random-based coded excitation methods have been proposed to reduce standing wave formation and improve focusing of transcranial ultrasound. However, no clear evidence has been shown to support the benefits of these ultrasonic excitation sequences in vivo. This study evaluates the chirp and periodic selection of random frequency (PSRF) coded-excitation methods for opening the blood-brain barrier (BBB) in mice. Three groups of mice (n=15) were injected with polydisperse microbubbles and sonicated in the caudate putamen using the chirp/PSRF coded (bandwidth: 1.5-1.9 MHz, peak negative pressure: 0.52 MPa, duration: 30 s) or standard ultrasound (frequency: 1.5 MHz, pressure: 0.52 MPa, burst duration: 20 ms, duration: 5 min) sequences. T1-weighted contrast-enhanced MRI scans were performed to quantitatively analyze focused ultrasound induced BBB opening. The mean opening volumes evaluated from the MRI were 9.38±5.71 mm3, 8.91±3.91 mm3 and 35.47 ± 5.10 mm3 for the chirp, random and regular sonications, respectively. The mean cavitation levels were 55.40±28.43 V.s, 63.87±29.97 V.s and 356.52±257.15 V.s for the chirp, random and regular sonications, respectively. The chirp and PSRF coded pulsing sequences improved the BBB opening localization by inducing lower cavitation levels and smaller opening volumes compared to results of the regular sonication technique. Larger bandwidths were associated with more focused targeting but were limited by the frequency response of the transducer, the skull attenuation and the microbubbles optimal frequency range. The coded methods could therefore facilitate highly localized drug delivery as well as benefit other transcranial ultrasound techniques that use higher pressure levels and higher precision to induce the necessary bioeffects in a brain region while avoiding damage to the surrounding healthy tissue. PMID:26394091
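    A linear chirp spanning the 1.5-1.9 MHz band quoted above can be generated as in the sketch below; the 100-microsecond pulse length, 20 MHz sampling rate and rectangular envelope are assumptions for illustration, not the authors' actual sequence parameters.

        import numpy as np
        from scipy.signal import chirp

        # Illustrative linear chirp over the 1.5-1.9 MHz band used in the study.
        fs = 20e6                       # sampling rate [Hz] (assumed)
        t_pulse = 100e-6                # single pulse length [s] (assumed)
        t = np.arange(0.0, t_pulse, 1.0 / fs)
        pulse = chirp(t, f0=1.5e6, t1=t_pulse, f1=1.9e6, method="linear")
        print(pulse.shape)              # 2000 samples per pulse at these settings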

  8. U.S. military mental health care utilization and attrition prior to the wars in Iraq and Afghanistan.

    PubMed

    Garvey Wilson, Abigail L; Messer, Stephen C; Hoge, Charles W

    2009-06-01

    Health care utilization studies of mental disorders focus largely on the ICD-9 category 290-319, and do not generally include analysis of visits for mental health problems identified under V-code categories. Although active duty service members represent a large young adult employed population who use mental health services at similar rates as age-matched civilian populations, V-codes are used in a larger proportion of mental health visits in military mental health care settings than in civilian settings. However, the utilization of these diagnoses has not been systematically studied. The purpose of this study is to characterize outpatient behavioral health visits in military health care facilities prior to Operation Iraqi Freedom, including the use of diagnoses outside of the ICD-9 290-319 range, in order to evaluate the overall burden of mental health care. This study establishes baseline rates of mental health care utilization in military mental health clinics in 2000 and serves as a comparison for future studies of the mental health care burden of the current war. All active duty service members who received care in military outpatient clinics in 2000 (n = 1.35 million) were included. Primary diagnoses were grouped according to mental health relevance in the following categories: mental disorders (ICD-9 290-319), mental health V-code diagnoses (used primarily by behavioral health providers that were indicative of a potential mental health problem), and all other diagnoses. Rates of service utilization within behavioral health clinics were compared with rates in other outpatient clinics for each of the diagnostic groups, reported as individuals or visits per 1,000 person-years. Cox proportional hazard regression was used to produce hazard ratios as measures of association between each of the diagnostic groups and attrition from military service. Time to attrition in months was the difference between the date of military separation and the date of first clinic visit in 2000. Data were obtained from the Defense Medical Surveillance System. The total number of individuals who utilized behavioral health services in 2000 was just over 115 per 1,000 person-years, almost 12% of the military population. Out of every 1,000 person-years, 57.5 individuals received care from behavioral health providers involving an ICD-9 290-319 mental disorder diagnosis, and an additional 26.7 per 1,000 person-years received care in behavioral health clinics only for V-code diagnoses. Attrition from service was correlated with both categories of mental health-related diagnoses. After 1 year, approximately 38% of individuals who received a mental disorder diagnosis left the military, compared with 23% of those who received mental health V-code diagnoses and 14% of those who received health care for any other reason (which included well visits for routine physicals). This study establishes baseline rates of pre-war behavioral healthcare utilization among military service members, and the relationship of mental health care use and attrition from service. The research indicates that in the military population the burden of mental illness in outpatient clinics is significantly greater when V-code diagnoses are included along with conventional mental disorder diagnostic codes.
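    The utilization figures above are expressed as individuals per 1,000 person-years; a minimal sketch of that calculation follows, with placeholder counts chosen only to be consistent with the reported 57.5 per 1,000 rate rather than taken from the Defense Medical Surveillance System data.

        # Minimal sketch: utilization expressed as individuals per 1,000 person-years.
        # The counts below are placeholders consistent with the reported figures,
        # not the study's actual tallies.
        def rate_per_1000_py(n_individuals, person_years):
            return 1000.0 * n_individuals / person_years

        print(round(rate_per_1000_py(77_600, 1_350_000), 1))   # ~57.5 per 1,000 person-years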

  9. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

    The dose distributions from proton pencil beam scanning were calculated with FLUKA, GEANT4, MCNP, and PHITS, in order to investigate their applicability to proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100-MeV and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first, but with proton energies in a Gaussian distribution. The comparison to the measurement indicates that the inter-code differences might be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time was also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan, and the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations found in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.

  10. JASMIN: Japanese-American study of muon interactions and neutron detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroshi; /JAEA, Ibaraki; Mokhov, N.V.

    Experimental studies of shielding and radiation effects at Fermi National Accelerator Laboratory (FNAL) have been carried out under a collaboration between FNAL and Japan, aiming at benchmarking of simulation codes and study of irradiation effects for upgrade and design of new high-energy accelerator facilities. The purposes of this collaboration are (1) acquisition of shielding data in a proton beam energy domain above 100 GeV; (2) further evaluation of the predictive accuracy of the PHITS and MARS codes; (3) modification of physics models and data in these codes if needed; (4) establishment of an irradiation field for radiation effect tests; and (5) development of a code module for improved description of radiation effects. A series of experiments has been performed at the Pbar target station and the NuMI facility, using irradiation of targets with 120 GeV protons for antiproton and neutrino production, as well as the M-test beam line (M-test) for measuring nuclear data and detector responses. Various nuclear and shielding data have been measured by activation methods with chemical separation techniques as well as by other detectors such as a Bonner ball counter. Analyses of the experimental data are in progress for benchmarking the PHITS and MARS15 codes. In this presentation recent activities and results are reviewed.

  11. DSMC computations of hypersonic flow separation and re-attachment in the transition to continuum regime

    NASA Astrophysics Data System (ADS)

    Prakash, Ram; Gai, Sudhir L.; O'Byrne, Sean; Brown, Melrose

    2016-11-01

    The flow over a 'tick'-shaped configuration is computed using two Direct Simulation Monte Carlo codes: the DS2V code of Bird and the code from Sandia National Laboratories, called SPARTA. The configuration creates a flow field where the flow is expanded initially but is then affected by the adverse pressure gradient induced by a compression surface. The flow field is challenging in the sense that the full flow domain comprises localized areas spanning the continuum and transitional regimes. The present work focuses on the capability of SPARTA to model such flow conditions and on a comparative evaluation with results from DS2V. An extensive grid adaptation study is performed using both codes on a model with a sharp leading edge, and the converged results are then compared. The computational predictions are evaluated in terms of surface parameters such as heat flux, shear stress, pressure and velocity slip. SPARTA consistently predicts higher values for these surface properties. The skin friction predictions of both codes do not give any indication of separation, but the velocity slip plots indicate incipient separation behavior at the corner. The differences in the results are attributed to the flow resolution at the leading edge, which dictates the downstream flow characteristics.

  12. Preliminary results of Malaysian nuclear agency plasma focus (MNA-PF) as a slow focus mode device for argon and deuterium filling gas in correlation with Lee model code

    NASA Astrophysics Data System (ADS)

    Zin, M. F. M.; Baijan, A. H.; Damideh, V.; Hashim, S. A.; Sabri, R. M.

    2017-03-01

    In this work, preliminary results of the MNA-PF device operated in slow focus mode are presented. Four different Rogowski coils were designed and constructed for dI/dt signal measurements; the measurements show that the response frequency of a Rogowski coil can affect the signal time resolution and delay, which in turn can change the inferred discharge circuit inductance. Experimental results for 10 to 20 mbar deuterium and 0.5 to 6 mbar argon, captured by the 630 MHz Rogowski coil and correlated with the Lee model code, are presented. Current fitting using the Lee model code shows that the speed factor for the MNA-PF device operated with 13 mbar deuterium is 30 kA/(cm.torr^1/2) at 14 kV, which indicates that the device is operating in slow focus mode. The model parameters fm and fmr predicted by the Lee model code during current fitting for 13 mbar deuterium at 14 kV were 0.025 and 0.31, respectively. A Microspec-4 neutron detector was used to obtain the dose rate, which was found to peak at 4.78 uSv/hr, and the maximum neutron yield calculated from the Lee model code is 7.5E+03 neutrons per shot.
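    The speed (drive) factor quoted above is commonly defined as S = Ipeak / (a * sqrt(p)) in kA/(cm.torr^1/2); the sketch below shows that calculation with the 13 mbar deuterium filling pressure from the record, while the peak current and anode radius are hypothetical placeholders.

        import math

        # Hedged sketch of the plasma-focus speed factor S = I_peak / (a * sqrt(p)).
        # Only the 13 mbar deuterium pressure comes from the record; the peak current
        # and anode radius below are hypothetical.
        def speed_factor(i_peak_kA, anode_radius_cm, pressure_mbar):
            pressure_torr = pressure_mbar * 0.750062     # 1 mbar = 0.750062 torr
            return i_peak_kA / (anode_radius_cm * math.sqrt(pressure_torr))

        print(round(speed_factor(94.0, 1.0, 13.0), 1))   # ~30 kA/(cm.torr^1/2)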

  13. Transcriptome Analysis of Scorpion Species Belonging to the Vaejovis Genus

    PubMed Central

    Quintero-Hernández, Verónica; Ramírez-Carreto, Santos; Romero-Gutiérrez, María Teresa; Valdez-Velázquez, Laura L.; Becerril, Baltazar; Possani, Lourival D.; Ortiz, Ernesto

    2015-01-01

    Scorpions belonging to the Buthidae family have traditionally drawn much of the biochemist’s attention due to the strong toxicity of their venoms. Scorpions not toxic to mammals, however, also have complex venoms. They have been shown to be an important source of bioactive peptides, some of them identified as potential drug candidates for the treatment of several emerging diseases and conditions. It is therefore important to characterize the large diversity of components found in the non-Buthidae venoms. As a contribution to this goal, this manuscript reports the construction and characterization of cDNA libraries from four scorpion species belonging to the Vaejovis genus of the Vaejovidae family: Vaejovis mexicanus, V. intrepidus, V. subcristatus and V. punctatus. Some sequences coding for channel-acting toxins were found, as expected, but the main transcribed genes in the glands actively producing venom were those coding for non disulfide-bridged peptides. The ESTs coding for putative channel-acting toxins, corresponded to sodium channel β toxins, to members of the potassium channel-acting α or κ families, and to calcium channel-acting toxins of the calcin family. Transcripts for scorpine-like peptides of two different lengths were found, with some of the species coding for the two kinds. One sequence coding for La1-like peptides, of yet unknown function, was found for each species. Finally, the most abundant transcripts corresponded to peptides belonging to the long chain multifunctional NDBP-2 family and to the short antimicrobials of the NDBP-4 family. This apparent venom composition is in correspondence with the data obtained to date for other non-Buthidae species. Our study constitutes the first approach to the characterization of the venom gland transcriptome for scorpion species belonging to the Vaejovidae family. PMID:25659089

  14. Transcriptome analysis of scorpion species belonging to the Vaejovis genus.

    PubMed

    Quintero-Hernández, Verónica; Ramírez-Carreto, Santos; Romero-Gutiérrez, María Teresa; Valdez-Velázquez, Laura L; Becerril, Baltazar; Possani, Lourival D; Ortiz, Ernesto

    2015-01-01

    Scorpions belonging to the Buthidae family have traditionally drawn much of the biochemist's attention due to the strong toxicity of their venoms. Scorpions not toxic to mammals, however, also have complex venoms. They have been shown to be an important source of bioactive peptides, some of them identified as potential drug candidates for the treatment of several emerging diseases and conditions. It is therefore important to characterize the large diversity of components found in the non-Buthidae venoms. As a contribution to this goal, this manuscript reports the construction and characterization of cDNA libraries from four scorpion species belonging to the Vaejovis genus of the Vaejovidae family: Vaejovis mexicanus, V. intrepidus, V. subcristatus and V. punctatus. Some sequences coding for channel-acting toxins were found, as expected, but the main transcribed genes in the glands actively producing venom were those coding for non disulfide-bridged peptides. The ESTs coding for putative channel-acting toxins, corresponded to sodium channel β toxins, to members of the potassium channel-acting α or κ families, and to calcium channel-acting toxins of the calcin family. Transcripts for scorpine-like peptides of two different lengths were found, with some of the species coding for the two kinds. One sequence coding for La1-like peptides, of yet unknown function, was found for each species. Finally, the most abundant transcripts corresponded to peptides belonging to the long chain multifunctional NDBP-2 family and to the short antimicrobials of the NDBP-4 family. This apparent venom composition is in correspondence with the data obtained to date for other non-Buthidae species. Our study constitutes the first approach to the characterization of the venom gland transcriptome for scorpion species belonging to the Vaejovidae family.

  15. A cautionary tale: the non-causal association between type 2 diabetes risk SNP, rs7756992, and levels of non-coding RNA, CDKAL1-v1.

    PubMed

    Locke, Jonathan M; Wei, Fan-Yan; Tomizawa, Kazuhito; Weedon, Michael N; Harries, Lorna W

    2015-04-01

    Intronic single nucleotide polymorphisms (SNPs) in the CDKAL1 gene are associated with risk of developing type 2 diabetes. A strong correlation between risk alleles and lower levels of the non-coding RNA, CDKAL1-v1, has recently been reported in whole blood extracted from Japanese individuals. We sought to replicate this association in two independent cohorts: one using whole blood from white UK-resident individuals, and one using a collection of human pancreatic islets, a more relevant tissue type to study with respect to the aetiology of diabetes. Levels of CDKAL1-v1 were measured by real-time PCR using RNA extracted from human whole blood (n = 70) and human pancreatic islets (n = 48). Expression with respect to genotype was then determined. In a simple linear regression model, expression of CDKAL1-v1 was associated with the lead type 2 diabetes-associated SNP, rs7756992, in whole blood and islets. However, these associations were abolished or substantially reduced in multiple regression models taking into account rs9366357 genotype: a moderately linked SNP explaining a much larger amount of the variation in CDKAL1-v1 levels, but not strongly associated with risk of type 2 diabetes. Contrary to previous findings, we provide evidence against a role for dysregulated expression of CDKAL1-v1 in mediating the association between intronic SNPs in CDKAL1 and susceptibility to type 2 diabetes. The results of this study illustrate how caution should be exercised when inferring causality from an association between disease-risk genotype and non-coding RNA expression.

  16. KAOS/LIB-V: A library of nuclear response functions generated by KAOS-V code from ENDF/B-V and other data files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farawila, Y.; Gohar, Y.; Maynard, C.

    1989-04-01

    KAOS/LIB-V: A library of processed nuclear responses for neutronics analyses of nuclear systems has been generated. The library was prepared using the KAOS-V code and nuclear data from ENDF/B-V. The library includes kerma (kinetic energy released in materials) factors and other nuclear response functions for all materials presently of interest in fusion and fission applications, for 43 nonfissionable and 15 fissionable isotopes and elements. The nuclear response functions include gas production and tritium-breeding functions, and all important reaction cross sections. KAOS/LIB-V employs the VITAMIN-E weighting function and energy group structure of 174 neutron groups. Auxiliary nuclear data bases, e.g., the Japanese evaluated nuclear data library JENDL-2, were used as a source of isotopic cross sections when these data are not provided in ENDF/B-V files for a natural element. These are needed mainly to estimate average quantities such as effective Q-values for the natural element. This analysis of local energy deposition was instrumental in detecting and understanding energy balance deficiencies and other problems in the ENDF/B-V data. Pertinent information about the library and a graphical display of the main nuclear response functions for all materials in the library are given. 35 refs.

  17. Radial dependence of lineal energy distribution of 290-MeV/u carbon and 500-MeV/u iron ion beams using a wall-less tissue-equivalent proportional counter

    PubMed Central

    Tsuda, Shuichi; Sato, Tatsuhiko; Watanabe, Ritsuko; Takada, Masashi

    2015-01-01

    Using a wall-less tissue-equivalent proportional counter for a 0.72-μm site in tissue, we measured the radial dependence of the lineal energy distribution, yf(y), of 290-MeV/u carbon ion and 500-MeV/u iron ion beams. The measured yf(y) distributions and the dose-mean lineal energy, ȳD, were compared with calculations performed with the track structure simulation code TRACION and the microdosimetric function of the Particle and Heavy Ion Transport code System (PHITS). The values of the measured ȳD were consistent with calculated results within an error of 2%, but differences in the shape of yf(y) were observed for iron ion irradiation. This result indicates that further improvement of the calculation model for the yf(y) distribution in PHITS is needed in the analytical function that describes energy deposition by delta rays, particularly for primary ions having linear energy transfer in excess of a few hundred keV/μm. PMID:25210053
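    The dose-mean lineal energy is the second moment of the frequency distribution divided by its first moment; a hedged numerical sketch of that integral from a tabulated f(y) follows, using a synthetic log-normal distribution rather than the measured spectra.

        import numpy as np

        # Sketch: dose-mean lineal energy ybar_D = int(y^2 f(y) dy) / int(y f(y) dy),
        # evaluated by trapezoidal integration. The log-normal f(y) is synthetic.
        def dose_mean_lineal_energy(y, f):
            y, f = np.asarray(y, float), np.asarray(f, float)
            return np.trapz(y ** 2 * f, y) / np.trapz(y * f, y)

        y = np.logspace(-1, 3, 500)                                     # lineal energy [keV/um]
        f = np.exp(-0.5 * ((np.log(y) - np.log(20.0)) / 0.6) ** 2) / y  # synthetic f(y)
        print(round(dose_mean_lineal_energy(y, f), 1))                  # [keV/um]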

  18. Cyclotron production of 48V via natTi(d,x)48V nuclear reaction; a promising radionuclide

    NASA Astrophysics Data System (ADS)

    Usman, A. R.; Khandaker, M. U.; Haba, H.

    2017-06-01

    In this experimental work, we studied the excitation function of the natTi(d,x)48V nuclear reaction from 24 MeV down to the threshold energy. Natural titanium foils were arranged in the popular stacked-foil method and activated with a deuteron beam from an AVF cyclotron at RIKEN, Wako, Japan. The emitted γ activities from the activated foils were measured using offline γ-ray spectrometry. The present results were analyzed and compared with earlier published experimental data and with the evaluated data of the TALYS code. Our new measured data agree with some of the earlier reported experimental data, while only partial agreement is found with the evaluated theoretical data. In addition to the use of 48V as a beam intensity monitor, recent studies indicate its potential as a calibration source for PET cameras and as a (radioactive) label for medical applications. The results are also expected to further enrich the experimental database and to play an important role in the design of nuclear reaction model codes.
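    Stacked-foil measurements of this kind typically convert an end-of-bombardment activity into a cross section through the thin-target activation relation; the sketch below shows that relation with placeholder inputs (the 48V half-life of about 15.97 days is the only physical constant used, and it should still be checked against current nuclear data).

        import math

        # Hedged sketch of the thin-target activation relation often used in
        # stacked-foil work: sigma = A_EOB / (N_t * phi * (1 - exp(-lambda * t_irr))),
        # with A_EOB the end-of-bombardment activity [Bq], N_t the areal density of
        # target nuclei [1/cm^2], phi the beam particle rate [1/s] and lambda the
        # decay constant. All numerical inputs below are placeholders.
        def activation_cross_section_mb(a_eob_bq, n_t_per_cm2, flux_per_s,
                                        half_life_s, t_irr_s):
            lam = math.log(2.0) / half_life_s
            sigma_cm2 = a_eob_bq / (n_t_per_cm2 * flux_per_s
                                    * (1.0 - math.exp(-lam * t_irr_s)))
            return sigma_cm2 * 1e27                    # cm^2 -> millibarn

        print(activation_cross_section_mb(a_eob_bq=2.0e5, n_t_per_cm2=5e20,
                                          flux_per_s=6.2e12,
                                          half_life_s=15.97 * 86400, t_irr_s=3600.0))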

  19. Shielding calculations for industrial 5/7.5MeV electron accelerators using the MCNP Monte Carlo Code

    NASA Astrophysics Data System (ADS)

    Peri, Eyal; Orion, Itzhak

    2017-09-01

    High energy X-rays from accelerators are used to irradiate food ingredients to prevent the growth and development of unwanted biological organisms in food and thereby extend the shelf life of the products. The X-rays are produced by accelerating electrons to 5 MeV and stopping them in a heavy (high-Z) target. Since 2004, the FDA has approved the use of 7.5 MeV, providing higher production rates with lower treatment costs. In this study we calculated all the essential data needed for a straightforward concrete shielding design of typical food-irradiation accelerator rooms. The following evaluation is done using the MCNP Monte Carlo code system: (1) angular dependence (0-180°) of the photon dose rate for 5 MeV and 7.5 MeV electron beams bombarding iron, aluminum, gold, tantalum, and tungsten targets; (2) angular dependence (0-180°) of the simulated bremsstrahlung spectral distribution for gold, tantalum, and tungsten bombarded by 5 MeV and 7.5 MeV electron beams; (3) concrete attenuation calculations at several photon emission angles for the 5 MeV and 7.5 MeV electron beams bombarding a tantalum target. Based on the simulations, we calculated the expected increase in dose rate for facilities intending to increase the energy from 5 MeV to 7.5 MeV, and the additional concrete thickness needed to keep the existing dose rate unchanged.
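    For a given increase in unshielded dose rate, the extra concrete required follows from a simple tenth-value-layer (TVL) estimate, as sketched below; the TVL value and the dose-rate ratio are illustrative assumptions, not results from the study.

        import math

        # Hedged sketch: under exponential (TVL-based) attenuation, restoring the
        # original dose rate after it rises by a factor k requires an extra thickness
        # delta_x = TVL * log10(k). The TVL used here is an assumed placeholder.
        def added_concrete_cm(dose_rate_increase_factor, tvl_cm):
            return tvl_cm * math.log10(dose_rate_increase_factor)

        print(round(added_concrete_cm(2.0, 44.0), 1))   # ~13 cm for a doubled dose rate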

  20. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2017-05-01

    Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding.

  1. Validation and verification of the laser range safety tool (LRST)

    NASA Astrophysics Data System (ADS)

    Kennedy, Paul K.; Keppler, Kenneth S.; Thomas, Robert J.; Polhamus, Garrett D.; Smith, Peter A.; Trevino, Javier O.; Seaman, Daniel V.; Gallaway, Robert A.; Crockett, Gregg A.

    2003-06-01

    The U.S. Dept. of Defense (DOD) is currently developing and testing a number of High Energy Laser (HEL) weapons systems. DOD range safety officers now face the challenge of designing safe methods of testing HELs on DOD ranges. In particular, safety officers need to ensure that diffuse and specular reflections from HEL system targets, as well as direct beam paths, are contained within DOD boundaries. If both the laser source and the target are moving, as they are for the Airborne Laser (ABL), a complex series of calculations is required and manual calculations are impractical. Over the past 5 years, the Optical Radiation Branch of the Air Force Research Laboratory (AFRL/HEDO), the ABL System Program Office, Logicon-RDA, and Northrop Grumman have worked together to develop a computer model called the Laser Range Safety Tool (LRST), specifically designed for HEL reflection hazard analyses. The code, which is still under development, is currently tailored to support the ABL program. AFRL/HEDO has led an LRST Validation and Verification (V&V) effort since 1998, in order to determine if code predictions are accurate. This paper summarizes LRST V&V efforts to date, including: i) comparison of code results with laboratory measurements of reflected laser energy and with reflection measurements made during actual HEL field tests, and ii) validation of LRST's hazard zone computations.

  2. Adaptive Hybrid Picture Coding. Volume 2.

    DTIC Science & Technology

    1985-02-01

    Table of contents excerpt: V.a Measurement Vector; V.b Size Variable Centroid Vector; V.c Shape Vector; ... the Program for the Adaptive Line of Sight Method; B. Details of the Feature Vector Formation Program ... Shape recognition is analogous to recognition of curves in space; therefore, well-known concepts and theorems from differential geometry can be ...

  3. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked to electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.

  4. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  5. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental work, is a new experience for our group, intended to establish confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates, measured by the NAA method and calculated with the MCNP code, are compared.

  6. Characterization of a hybrid target multi-keV x-ray source by a multi-parameter statistical analysis of titanium K-shell emission

    DOE PAGES

    Primout, M.; Babonneau, D.; Jacquet, L.; ...

    2015-11-10

    We studied the titanium K-shell emission spectra from multi-keV x-ray source experiments with hybrid targets on the OMEGA laser facility. Using the collisional-radiative TRANSPEC code, dedicated to K-shell spectroscopy, we reproduced the main features of the detailed spectra measured with the time-resolved MSPEC spectrometer. We developed a general method to infer the Ne, Te and Ti characteristics of the target plasma from the spectral analysis (ratio of integrated Lyman-α to Helium-α in-band emission and the peak amplitude of individual line ratios) of the multi-keV x-ray emission. Finally, these thermodynamic conditions are compared to those calculated independently by the radiation-hydrodynamics transport code FCI2.

  7. Excitation functions of the natCr(p,x)44Ti, 56Fe(p,x)44Ti, natNi(p,x)44Ti and 93Nb(p,x)44Ti reactions at energies up to 2.6 GeV

    NASA Astrophysics Data System (ADS)

    Titarenko, Yu. E.; Batyaev, V. F.; Pavlov, K. V.; Titarenko, A. Yu.; Zhivun, V. M.; Chauzova, M. V.; Balyuk, S. A.; Bebenin, P. V.; Ignatyuk, A. V.; Mashnik, S. G.; Leray, S.; Boudard, A.; David, J. C.; Mancusi, D.; Cugnon, J.; Yariv, Y.; Nishihara, K.; Matsuda, N.; Kumawat, H.; Stankovskiy, A. Yu.

    2016-06-01

    The paper presents the measured cumulative yields of 44Ti for natCr, 56Fe, natNi and 93Nb samples irradiated by protons at the energy range 0.04-2.6 GeV. The obtained excitation functions are compared with calculations of the well-known codes: ISABEL, Bertini, INCL4.2+ABLA, INCL4.5+ABLA07, PHITS, CASCADE07 and CEM03.02. The predictive power of these codes regarding the studied nuclides is analyzed.

  8. Computed secondary-particle energy spectra following nonelastic neutron interactions with C-12 for E(n) between 15 and 60 MeV: Comparisons of results from two calculational methods

    NASA Astrophysics Data System (ADS)

    Dickens, J. K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, d(sigma)/dE, following nonelastic neutron interactions with C-12 for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.

  9. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new, improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and the radial distribution of dose are presented for ions and compared with experimental data. A model is described for the simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for walled and wall-less counters. The data show a contribution of indirect effects to the lineal energy distribution for the walled counter response even at such a low ion energy.

  10. Integration of design, structural, thermal and optical analysis: And user's guide for structural-to-optical translator (PATCOD)

    NASA Technical Reports Server (NTRS)

    Amundsen, R. M.; Feldhaus, W. S.; Little, A. D.; Mitchum, M. V.

    1995-01-01

    Electronic integration of design and analysis processes was achieved and refined at Langley Research Center (LaRC) during the development of an optical bench for a laser-based aerospace experiment. Mechanical design has been integrated with thermal, structural and optical analyses. Electronic import of the model geometry eliminates the repetitive steps of geometry input for each analysis model, leading to faster and more accurate analyses. Guidelines for integrated model development are given. This integrated analysis process has been built around software that was already in use by designers and analysts at LaRC. The process as currently implemented uses Pro/Engineer for design, Pro/Manufacturing for fabrication, PATRAN for solid modeling, NASTRAN for structural analysis, SINDA-85 and P/Thermal for thermal analysis, and Code V for optical analysis. Currently, the only analysis model that must be built manually is the Code V model; all others can be imported from the Pro/E geometry. The translator from PATRAN results to Code V optical analysis (PATCOD) was developed and tested at LaRC. Directions for use of the translator and the other models are given.

  11. Complete Genome Sequence of Bacteroides ovatus V975.

    PubMed

    Wegmann, Udo; Goesmann, Alexander; Carding, Simon R

    2016-12-01

    The complete genome sequence of Bacteroides ovatus V975 was determined. The genome consists of a single circular chromosome of 6,475,296 bp containing five rRNA operons, 68 tRNA genes, and 4,959 coding genes. Copyright © 2016 Wegmann et al.

  12. ^235U(n,xnγ) Excitation Function Measurements Using Gamma-Ray Spectroscopy at GEANIE

    NASA Astrophysics Data System (ADS)

    Younes, W.; Becker, J. A.; Bernstein, L. A.; Archer, D. E.; Stoyer, M. A.; Hauschild, K.; Drake, D. M.; Johns, G. D.; Nelson, R. O.; Wilburn, S. W.

    1998-04-01

    The ^235U(n,xn) cross sections (where x<=2) have previously been measured at several incident neutron energies. In particular, the ^235U(n,2n) cross section has been measured reliably (J. Frehaut et al., Nucl. Sci. Eng. 74, 29 (1980)) up to the peak near E_n ≈ 11 MeV, but not along the tail, which is predicted by some codes (M.B. Chadwick, private communication) to yield significant (e.g. >= 10% of peak) cross section out to E_n ≈ 30 MeV. We have measured gamma-ray spectra resulting from ^235U(n,xn) as a function of neutron energy in the range 1 MeV <~ E_n <~ 200 MeV using the GEANIE spectrometer at the LANSCE/WNR ``white'' neutron source. We will present excitation functions for the de-excitation gamma rays in ^234,235U compared to predictions from the Hauser-Feshbach-preequilibrium code GNASH (M.B. Chadwick and P.G. Young, Los Alamos Report No. LA-UR-93-104, 1993).

  13. Neutron spectrometry in a mixed field of neutrons and protons with a phoswich neutron detector Part I: response functions for photons and neutrons of the phoswich neutron detector

    NASA Astrophysics Data System (ADS)

    Takada, M.; Taniguchi, S.; Nakamura, T.; Nakao, N.; Uwamino, Y.; Shibata, T.; Fujitaka, K.

    2001-06-01

    We have developed a phoswich neutron detector consisting of an NE213 liquid scintillator surrounded by an NE115 plastic scintillator to distinguish photon and neutron events in a charged-particle mixed field. To obtain the energy spectra by unfolding, the response functions to neutrons and photons were obtained by the experiment and calculation. The response functions to photons were measured with radionuclide sources, and were calculated with the EGS4-PRESTA code. The response functions to neutrons were measured with a white neutron source produced by the bombardment of 135 MeV protons onto a Be+C target using a TOF method, and were calculated with the SCINFUL code, which we revised in order to calculate neutron response functions up to 135 MeV. Based on these experimental and calculated results, response matrices for photons up to 20 MeV and neutrons up to 132 MeV could finally be obtained.

  14. Extension of the energy range of experimental activation cross-sections data of deuteron induced nuclear reactions on indium up to 50MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-11-01

    The energy range of our earlier measured activation cross-section data for longer-lived products of deuteron induced nuclear reactions on indium was extended from 40 MeV up to 50 MeV. The traditional stacked-foil irradiation technique and non-destructive gamma spectrometry were used. No experimental data were found in the literature for this higher energy range. Experimental cross sections for the formation of the radionuclides (113,110)Sn, (116m,115m,114m,113m,111,110g,109)In and (115)Cd are reported in the 37-50 MeV energy range; for the production of (110)Sn and (110g,109)In these are the first measurements ever. The experimental data were compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS 1.6 nuclear model code as listed in the on-line library TENDL-2014. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. SecureQEMU: Emulation-Based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    DTIC Science & Technology

    2008-12-01

    The indexed excerpt consists of fragments of the report's C source code covering PE section insertion, per-page SHA-256 digests, and AES key and initialization-vector handling.

  16. Pacific Northwest (PNW) Hydrologic Landscape (HL) polygons and HL code

    EPA Pesticide Factsheets

    A five-letter hydrologic landscape code representing five indices of hydrologic form that are related to hydrologic function: climate, seasonality, aquifer permeability, terrain, and soil permeability. Each hydrologic assessment unit is classified by one of the 81 different five-letter codes representing these indices. Polygon features in this dataset were created by aggregating (dissolving boundaries between) adjacent, similarly-coded hydrologic assessment units. Climate Classes: V-Very wet, W-Wet, M-Moist, D-Dry, S-Semiarid, A-Arid. Seasonality Sub-Classes: w-Fall or winter, s-Spring. Aquifer Permeability Classes: H-High, L-Low. Terrain Classes: M-Mountain, T-Transitional, F-Flat. Soil Permeability Classes: H-High, L-Low.
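
    Decoding such a five-letter code is a purely mechanical lookup. The sketch below is a hypothetical Python helper, not part of the EPA dataset tooling; it assumes the position order given above (climate, seasonality, aquifer permeability, terrain, soil permeability) and the class letters listed in the record, and the example code "VwHMH" is made up for illustration.

      # Hypothetical decoder for the five-letter Pacific Northwest hydrologic
      # landscape (HL) codes; letter meanings are taken from the class lists
      # in the record, with the position order assumed to be
      # climate, seasonality, aquifer permeability, terrain, soil permeability.
      CLASSES = [
          ("climate", {"V": "Very wet", "W": "Wet", "M": "Moist",
                       "D": "Dry", "S": "Semiarid", "A": "Arid"}),
          ("seasonality", {"w": "Fall or winter", "s": "Spring"}),
          ("aquifer permeability", {"H": "High", "L": "Low"}),
          ("terrain", {"M": "Mountain", "T": "Transitional", "F": "Flat"}),
          ("soil permeability", {"H": "High", "L": "Low"}),
      ]

      def decode_hl_code(code: str) -> dict:
          """Return a dict of index name -> class label for a five-letter HL code."""
          if len(code) != 5:
              raise ValueError("HL code must have exactly five letters")
          decoded = {}
          for letter, (name, table) in zip(code, CLASSES):
              if letter not in table:
                  raise ValueError(f"invalid letter {letter!r} for {name}")
              decoded[name] = table[letter]
          return decoded

      # Example (hypothetical code):
      # decode_hl_code("VwHMH")
      # -> {'climate': 'Very wet', 'seasonality': 'Fall or winter', ...}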

  17. Final Evaluation of MIPS M/500

    DTIC Science & Technology

    1987-11-01

    recognizing common subexpressions by changing the code to read: acker(n, m) { if (n == 0) return m + 1; return acker(n - 1, m == 0 ? 1 : acker(n, m - 1)); } the total code ...

  18. Sticks and Stones: Why First Amendment Absolutism Fails When Applied to Campus Harassment Codes.

    ERIC Educational Resources Information Center

    Lumsden, Linda

    This paper analyzes how absolutist arguments against campus harassment codes violate the spirit of the first amendment, examining in particular the United States Supreme Court ruling in "RAV v. St. Paul." The paper begins by tracing the current development of first amendment doctrine, analyzing its inadequacy in the campus hate speech…

  19. Hybrid Hard and Soft Decision Decoding of Reed-Solomon Codes for M-ary Frequency-Shift Keying

    DTIC Science & Technology

    2010-06-01

    Keywords: Reed-Solomon (RS) coding, orthogonal signaling, Additive White Gaussian Noise (AWGN), Pulse-Noise Interference (PNI), coherent detection, noncoherent ... The indexed excerpt also lists contents entries such as "Coherent Demodulation of MFSK", "Noncoherent Demodulation of MFSK", and "Performance Simulation and Analysis of MFSK with RS Encoding, Hybrid HD SD Decoding, and Noncoherent Demodulation in AWGN".

  20. 76 FR 6503 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-04

    ... specifically requires the adoption of a code of ethics by an investment advisor to include, at a minimum: (i... persons to report any violations of the code of ethics promptly to the chief compliance officer (``CCO... of ethics; and (v) provisions requiring the investment advisor to provide each of the supervised...

  1. Verification and Validation (V&V) Methodologies for Multiphase Turbulent and Explosive Flows. V&V Case Studies of Computer Simulations from Los Alamos National Laboratory GMFIX codes

    NASA Astrophysics Data System (ADS)

    Dartevelle, S.

    2006-12-01

    Large-scale volcanic eruptions are inherently hazardous events and cannot be characterized by detailed and accurate in situ measurements, so explosive volcanic phenomenology is poorly constrained in terms of initial and inflow conditions. Consequently, little or no real-time data exist to Verify and Validate the computer codes developed to model these geophysical events as a whole. Code Verification and Validation nevertheless remains a necessary step, particularly as volcanologists increasingly use numerical results for mitigation of volcanic hazards. The Verification and Validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, Verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, Validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The Verification step is relatively simple to achieve formally, while, in the context of 'real world' explosive volcanism, the Validation step is all but impossible. Hence, instead of validating computer codes against the whole, largely unconstrained, large-scale volcanic phenomenology, we suggest focusing on the key physics that control these volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomena separately. Herewith, we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes, which have recently been redeveloped to meet the strict Quality Assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept. of Energy. GMFIX solves the Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle: the higher K, the further downstream the Mach disk, and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX displays all the properties found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which makes this code well suited for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing velocity profiles and turbulence quantities against well-constrained analog experiments; the computed velocity profiles agree with the analog ones, as does the production of turbulent quantities. Overall, the Verification and Validation experiments, although inherently challenging, suggest that GMFIX captures the most essential dynamical properties of multiphase supersonic flows and jets.

  2. Contour Curvature As an Invariant Code for Objects in Visual Area V4

    PubMed Central

    Pasupathy, Anitha

    2016-01-01

    Size-invariant object recognition—the ability to recognize objects across transformations of scale—is a fundamental feature of biological and artificial vision. To investigate its basis in the primate cerebral cortex, we measured single neuron responses to stimuli of varying size in visual area V4, a cornerstone of the object-processing pathway, in rhesus monkeys (Macaca mulatta). Leveraging two competing models for how neuronal selectivity for the bounding contours of objects may depend on stimulus size, we show that most V4 neurons (∼70%) encode objects in a size-invariant manner, consistent with selectivity for a size-independent parameter of boundary form: for these neurons, “normalized” curvature, rather than “absolute” curvature, provided a better account of responses. Our results demonstrate the suitability of contour curvature as a basis for size-invariant object representation in the visual cortex, and posit V4 as a foundation for behaviorally relevant object codes. SIGNIFICANCE STATEMENT Size-invariant object recognition is a bedrock for many perceptual and cognitive functions. Despite growing neurophysiological evidence for invariant object representations in the primate cortex, we still lack a basic understanding of the encoding rules that govern them. Classic work in the field of visual shape theory has long postulated that a representation of objects based on information about their bounding contours is well suited to mediate such an invariant code. In this study, we provide the first empirical support for this hypothesis, and its instantiation in single neurons of visual area V4. PMID:27194333
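
    The distinction between "absolute" and "normalized" curvature drawn above can be illustrated numerically: absolute curvature of a contour scales inversely with object size, whereas curvature multiplied by a size measure such as the perimeter is scale-invariant. The following is a generic Python (numpy) sketch assuming a closed contour sampled at uniform parameter steps; it is not the specific shape model or fitting procedure used in the study.

      import numpy as np

      def discrete_curvature(x, y):
          """Signed curvature of a closed 2-D contour sampled at points (x, y)."""
          dx, dy = np.gradient(x), np.gradient(y)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

      def normalized_curvature(x, y):
          """Curvature times contour perimeter: unchanged under uniform scaling."""
          k = discrete_curvature(x, y)
          seg = np.hypot(np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]]))
          return k * seg.sum()

      # A circle of radius r: absolute curvature 1/r changes with size,
      # normalized curvature (1/r) * 2*pi*r = 2*pi does not.
      t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      for r in (1.0, 3.0):
          x, y = r * np.cos(t), r * np.sin(t)
          print(r, discrete_curvature(x, y).mean(), normalized_curvature(x, y).mean())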

  3. a Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.

    2009-03-01

    Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modeling. A good benchmark problem is sufficiently simple to be handled by various codes without strong requirements on geometry representation capabilities, focuses on a few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modeling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.

  4. MESHMAKER (MM) V1.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MORIDIS, GEORGE

    2016-05-02

    MeshMaker v1.5 is a code that describes the system geometry and discretizes the domain in problems of flow and transport through porous and fractured media that are simulated using the TOUGH+ [Moridis and Pruess, 2014] or TOUGH2 [Pruess et al., 1999; 2012] families of codes. It is a significantly modified and drastically enhanced version of an earlier simpler facility that was embedded in the TOUGH2 codes [Pruess et al., 1999; 2012], from which it could not be separated. The code (MeshMaker.f90) is a stand-alone product written in FORTRAN 95/2003, is written according to the tenets of Object-Oriented Programming, has a modular structure and can perform a number of mesh generation and processing operations. It can generate two-dimensional radially symmetric (r,z) meshes, and one-, two-, and three-dimensional rectilinear (Cartesian) grids in (x,y,z). The code generates the file MESH, which includes all the elements and connections that describe the discretized simulation domain and conforms to the requirements of the TOUGH+ and TOUGH2 codes. Multiple-porosity processing for simulation of flow in naturally fractured reservoirs can be invoked by means of the keyword MINC, which stands for Multiple INteracting Continua. The MINC process operates on the data of the primary (porous medium) mesh as provided on disk file MESH, and generates a secondary mesh containing fracture and matrix elements with identical data formats on file MINC.
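
    At its core, this kind of mesh generation amounts to enumerating cell-centered elements and the connections (interface area and center-to-center distance) between neighbouring cells. The sketch below builds a small uniform Cartesian (x, y, z) grid in that spirit; it is a generic Python illustration only and does not reproduce MeshMaker's algorithms or the TOUGH+/TOUGH2 MESH file format.

      import itertools

      def cartesian_mesh(nx, ny, nz, dx, dy, dz):
          """Element centers and face connections for a uniform Cartesian grid.

          Elements are indexed (i, j, k); each connection stores the two element
          indices, the interface area, and the center-to-center distance.
          Generic sketch only -- not the TOUGH/TOUGH+ MESH file format.
          """
          elements = {(i, j, k): ((i + 0.5) * dx, (j + 0.5) * dy, (k + 0.5) * dz)
                      for i, j, k in itertools.product(range(nx), range(ny), range(nz))}
          connections = []
          for (i, j, k) in elements:
              if i + 1 < nx:   # x-direction face
                  connections.append((((i, j, k), (i + 1, j, k)), dy * dz, dx))
              if j + 1 < ny:   # y-direction face
                  connections.append((((i, j, k), (i, j + 1, k)), dx * dz, dy))
              if k + 1 < nz:   # z-direction face
                  connections.append((((i, j, k), (i, j, k + 1)), dx * dy, dz))
          return elements, connections

      elems, conns = cartesian_mesh(3, 2, 1, 10.0, 10.0, 5.0)
      print(len(elems), "elements,", len(conns), "connections")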

  5. Nucleotide sequence and structural organization of the human vasopressin pituitary receptor (V3) gene.

    PubMed

    René, P; Lenne, F; Ventura, M A; Bertagna, X; de Keyzer, Y

    2000-01-04

    In the pituitary, vasopressin triggers ACTH release through a specific receptor subtype, termed V3 or V1b. We cloned the V3 cDNA and showed that its expression was almost exclusive to pituitary corticotrophs and some corticotroph tumors. To study the determinants of this tissue specificity, we have now cloned the gene for the human (h) V3 receptor and characterized its structure. It is composed of two exons, spanning 10kb, with the coding region interrupted between transmembrane domains 6 and 7. We established that the transcription initiation site is located 498 nucleotides upstream of the initiator codon and showed that two polyadenylation sites may be used, while the most frequent is the most downstream. Sequence analysis of the promoter region showed no TATA box but identified consensus binding motifs for Sp1, CREB, and half sites of the estrogen receptor binding site. However comparison with another corticotroph-specific gene, proopiomelanocortin, did not identify common regulatory elements in the two promoters except for a short GC-rich region. Unexpectedly, hV3 gene analysis revealed that a formerly cloned 'artifactual' hV3 cDNA indeed corresponded to a spliced antisense transcript, overlapping the 5' part of the coding sequence in exon 1 and the promoter region. This transcript, hV3rev, was detected in normal pituitary and in many corticotroph tumors expressing hV3 sense mRNA and may therefore play a role in hV3 gene expression.

  6. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chamber (IC) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, an important issue is to evaluate the photon and electron response functions of the two commercially available ionization chambers, denoted TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them and carefully measured values for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both an optimal simple spherical model and a detailed IC model. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) BNCT clinical trial neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes for photon energies below 0.1 MeV and a similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger for both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams; for the Mg(Ar) chamber, however, the deviations reached 7.8-16.5% below 120 kVp X-ray beams. In this study we are especially interested in BNCT doses, where the low-energy photon contribution is small enough to ignore, and the MCNP model is therefore recognized as the most suitable for simulating the broadly distributed photon-electron and neutron energy responses of the paired ICs. MCNP also provides the best prediction of BNCT source adjustment from the detector's neutron and photon responses.
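
    For context, the paired-chamber technique mentioned above separates the neutron and photon dose components by solving two linear response equations, one per chamber, using neutron and photon sensitivities obtained from calibration or from response calculations such as those described here. The sketch below shows only that algebra, with made-up sensitivity numbers; it is not the dose-derivation procedure of the paper.

      import numpy as np

      def paired_chamber_doses(M_te, M_mg, k_te, h_te, k_mg, h_mg):
          """Solve  M_TE = k_TE*D_n + h_TE*D_g  and  M_Mg = k_Mg*D_n + h_Mg*D_g
          for the neutron dose D_n and photon dose D_g (dual ionization chamber method).
          Sensitivities k (neutron) and h (photon) come from calibration/simulation."""
          A = np.array([[k_te, h_te],
                        [k_mg, h_mg]])
          b = np.array([M_te, M_mg])
          D_n, D_g = np.linalg.solve(A, b)
          return D_n, D_g

      # Illustrative numbers only (not from the experiment described above):
      print(paired_chamber_doses(M_te=1.00, M_mg=0.45,
                                 k_te=0.95, h_te=1.00, k_mg=0.05, h_mg=1.00))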

  7. Pluridirectional High-Energy Agile Scanning Electron Radiotherapy (PHASER): Extremely Rapid Treatment for Early Lung Cancer

    DTIC Science & Technology

    2014-06-01

    BEAMnrc Monte Carlo (MC) codes were used to simulate 50-150 MeV VHEE beam dose deposition and its effects on steel and titanium (Ti) heterogeneities in a ... performed on a water-only geometry and on water with segmented prostheses (steel and Ti) geometries with 100 MeV and 150 MeV beams ... Results: the 100 MeV PDD 5 cm behind the steel/Ti heterogeneity was 51% less than in the ...

  8. Computing the cross sections of nuclear reactions with nuclear clusters emission for proton energies between 30 MeV and 2.6 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korovin, Yu. A.; Maksimushkina, A. V., E-mail: AVMaksimushkina@mephi.ru; Frolova, T. A.

    2016-12-15

    The cross sections of nuclear reactions involving emission of clusters of light nuclei in proton collisions with a heavy-metal target are computed for incident-proton energies between 30 MeV and 2.6 GeV. The calculation relies on the ALICE/ASH and CASCADE/INPE computer codes. The parameters determining the pre-equilibrium cluster emission are varied in the computation.

  9. Comparison of GATE/GEANT4 with EGSnrc and MCNP for electron dose calculations at energies between 15 keV and 20 MeV.

    PubMed

    Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V

    2011-02-07

    The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.

  10. Basic physical processes and reduced models for plasma detachment

    NASA Astrophysics Data System (ADS)

    Stangeby, P. C.

    2018-04-01

    The divertor of a tokamak reactor will have to satisfy a number of critical constraints, the first of which is that the divertor targets not fail due to excessive heating or sputter-erosion. This paramount constraint of target survival defines the operating window for the principal plasma properties at the divertor target, the density n_t and temperature T_t. In particular, T_et < 10 eV is shown to be required. Code and experimental studies show that the pressure-momentum loss by the plasma that occurs along flux tubes in the edge, between the divertor entrance and target, (i) correlates strongly with T_et, and (ii) begins to increase as T_et falls below 10 eV, becoming very strong by 1 eV. The transition between the high-recycling regime and the detached divertor regime has therefore been defined here to occur when T_et < 10 eV. Simple analytic models are developed (i) to relate (T_t, n_t) to the controlling conditions 'upstream', e.g. at the divertor entrance, and (ii) in turn to relate (T_t, n_t) to other important divertor quantities including (a) the required level of radiative cooling in the divertor, and (b) the ion flux to the target in the presence of volumetric loss of particles, momentum and power in the divertor. The 2 Point Model, 2PM, is a widely used analytic model for relating (T_t, n_t) to the controlling upstream conditions. The 2PM is derived here for various levels of complexity regarding the effects included. Analytic models of divertor detachment provide valuable insight and useful approximations, but more complete modeling requires the use of edge codes such as EDGE2D, SOLPS, SONIC, UEDGE, etc. Edge codes have grown to become quite sophisticated and now constitute, in effect, 'code-experiments' that, just as actual experiments, can benefit from interpretation in terms of simple conceptual frameworks. 2 Point Model Formatting, 2PMF, of edge code output can provide such a conceptual framework. Methods of applying 2PMF are illustrated here with some examples.
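
    For orientation, the simplest conduction-limited form of the 2PM couples three relations: pressure balance along the flux tube, Spitzer parallel electron heat conduction, and the sheath heat-transmission condition at the target. The sketch below iterates these textbook relations under simplifying assumptions (no volumetric losses, pure deuterium, equal electron and ion temperatures, assumed coefficient values); it illustrates the basic model only, not the extended derivations or the 2PMF procedure of the paper.

      import numpy as np

      E = 1.602e-19          # J per eV
      M_I = 2 * 1.67e-27     # deuterium ion mass, kg
      KAPPA0 = 2000.0        # Spitzer electron heat conduction coefficient, W m^-1 eV^-7/2 (assumed)
      GAMMA = 7.0            # sheath heat transmission coefficient (assumed)

      def two_point_model(n_u, q_par, L, iterations=200):
          """Basic conduction-limited two-point model (no volumetric losses).

          n_u   : upstream density [m^-3]
          q_par : parallel heat flux density entering the flux tube [W m^-2]
          L     : connection length [m]
          Returns (T_t [eV], n_t [m^-3], T_u [eV]).
          """
          T_t = 50.0                                        # initial guess, eV
          for _ in range(iterations):
              # upstream temperature from Spitzer conduction along the flux tube
              T_u = (T_t**3.5 + 3.5 * q_par * L / KAPPA0) ** (2.0 / 7.0)
              # target density from pressure balance  2 n_t T_t = n_u T_u
              n_t = n_u * T_u / (2.0 * T_t)
              # target temperature from the sheath condition  q_par = GAMMA n_t e T_t c_s
              c_fac = np.sqrt(2.0 * E / M_I)
              T_t = (q_par / (GAMMA * n_t * E * c_fac)) ** (2.0 / 3.0)
          return T_t, n_t, T_u

      # Illustrative, roughly edge-plasma-like numbers (not from the paper):
      print(two_point_model(n_u=3e19, q_par=1e8, L=50.0))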

  11. Alignment-based and alignment-free methods converge with experimental data on amino acids coded by stop codons at split between nuclear and mitochondrial genetic codes.

    PubMed

    Seligmann, Hervé

    2018-05-01

    Genetic codes mainly evolve by reassigning punctuation codons, starts and stops. Previous analyses assuming that undefined amino acids translate stops showed greater divergence between nuclear and mitochondrial genetic codes. Here, three independent methods converge on which amino acids translated stops at the split between nuclear and mitochondrial genetic codes: (a) alignment-free genetic code comparisons inserting different amino acids at stops; (b) alignment-based blast analyses of hypothetical peptides translated from non-coding mitochondrial sequences, inserting different amino acids at stops; (c) biases in amino acid insertions at stops in proteomic data. Hence short-term protein evolution models reconstruct long-term genetic code evolution. Mitochondria reassign stops to amino acids otherwise inserted at stops by codon-anticodon mismatches (near-cognate tRNAs). Hence dual function (translation termination and translation by codon-anticodon mismatch) precedes mitochondrial reassignments of stops to amino acids. Stop ambiguity increases coded information and compensates for endocellular mitogenome reduction. Mitochondrial codon reassignments might prevent viral infections. Copyright © 2018 Elsevier B.V. All rights reserved.
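
    Method (a) above boils down to translating the same sequences under variant codon tables in which each stop codon is tentatively assigned a trial amino acid and then comparing the resulting peptides. The sketch below shows that substitution step; the miniature codon table, the trial amino acids and the test sequence are placeholders for illustration, not data or code from the study.

      # Translate a reading frame under variant codon tables in which each
      # stop codon (TAA, TAG, TGA) is reassigned to a trial amino acid.
      MINI_TABLE = {"ATG": "M", "GCT": "A", "TGG": "W", "AAA": "K",
                    "TAA": "*", "TAG": "*", "TGA": "*"}
      STOPS = ("TAA", "TAG", "TGA")

      def translate(seq, table):
          return "".join(table.get(seq[i:i + 3], "X") for i in range(0, len(seq) - 2, 3))

      def variant_translations(seq, trial_amino_acids="WKQ"):
          """Yield (trial_aa, peptide) with all stop codons reassigned to trial_aa."""
          for aa in trial_amino_acids:
              table = dict(MINI_TABLE, **{stop: aa for stop in STOPS})
              yield aa, translate(seq, table)

      seq = "ATGGCTTGAAAATAG"          # toy sequence containing internal stops
      for aa, pep in variant_translations(seq):
          print(aa, pep)               # e.g. W -> 'MAWKW', K -> 'MAKKK', ...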

  12. A meta-analysis of in-vehicle and nomadic voice-recognition system interaction and driving performance.

    PubMed

    Simmons, Sarah M; Caird, Jeff K; Steel, Piers

    2017-09-01

    Driver distraction is a growing and pervasive issue that requires multiple solutions. Voice-recognition (V-R) systems may decrease the visual-manual (V-M) demands of a wide range of in-vehicle system and smartphone interactions. However, the degree that V-R systems integrated into vehicles or available in mobile phone applications affect driver distraction is incompletely understood. A comprehensive meta-analysis of experimental studies was conducted to address this knowledge gap. To meet study inclusion criteria, drivers had to interact with a V-R system while driving and doing everyday V-R tasks such as dialing, initiating a call, texting, emailing, destination entry or music selection. Coded dependent variables included detection, reaction time, lateral position, speed and headway. Comparisons of V-R systems with baseline driving and/or a V-M condition were also coded. Of 817 identified citations, 43 studies involving 2000 drivers and 183 effect sizes (r) were analyzed in the meta-analysis. Compared to baseline, driving while interacting with a V-R system is associated with increases in reaction time and lane positioning, and decreases in detection. When V-M systems were compared to V-R systems, drivers had slightly better performance with the latter system on reaction time, lane positioning and headway. Although V-R systems have some driving performance advantages over V-M systems, they have a distraction cost relative to driving without any system at all. The pattern of results indicates that V-R systems impose moderate distraction costs on driving. In addition, drivers minimally engage in compensatory performance adjustments such as reducing speed and increasing headway while using V-R systems. Implications of the results for theory, design guidelines and future research are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
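
    Effect sizes reported as correlations r, as in this meta-analysis, are conventionally pooled after a Fisher z transformation with each study weighted by n - 3. The sketch below shows that standard pooling arithmetic on hypothetical (r, n) pairs; it is generic meta-analysis bookkeeping, not the authors' actual analysis pipeline (which also handles moderators and heterogeneity).

      import math

      def pooled_effect_size(effects):
          """Fixed-effect pooled correlation from (r, n) pairs via Fisher's z.

          Each study's r is transformed to z = atanh(r), weighted by n - 3
          (the inverse of z's sampling variance), averaged, and back-transformed.
          """
          num = sum((n - 3) * math.atanh(r) for r, n in effects)
          den = sum(n - 3 for r, n in effects)
          return math.tanh(num / den)

      # Hypothetical (r, n) pairs, not values from the meta-analysis above:
      studies = [(0.25, 40), (0.10, 120), (0.35, 30)]
      print(round(pooled_effect_size(studies), 3))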

  13. Petty Cash Custodian Handbook: A ’Guide for Nonappropriated Fund Petty Cash Custodians in Morale, Welfare, and Recreation Activities.

    DTIC Science & Technology

    1988-04-01

    Contents excerpts: limitations (taxes); how to fill out required forms (AF Form 1401 Petty Cash/Refund Voucher, AF Form 2539 NAF Disbursement Request); general information; and a table of activity codes and titles (e.g., Lanes Food and Beverage, Lanes Pro Shop, Outside School Program, Course Maintenance, Outdoor Recreation, Education Services Program, Marina (On-Base) Facility, Chaplain).

  14. PV_LIB Toolbox v. 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    PV_LIB comprises a library of MATLAB code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.
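
    One of the steps listed above, estimating irradiance in the plane of array, is a transposition of the beam, sky-diffuse and ground-reflected components onto the tilted module. The sketch below uses the simple isotropic-sky transposition as an illustration and is written in Python rather than MATLAB; it is a generic textbook formula with an assumed albedo, not a transcription of the PV_LIB functions.

      import math

      def poa_irradiance_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
          """Plane-of-array irradiance [W/m^2] from the isotropic-sky transposition.

          dni, dhi, ghi : direct-normal, diffuse-horizontal, global-horizontal irradiance
          tilt_deg      : module tilt from horizontal
          aoi_deg       : angle of incidence of the direct beam on the module
          albedo        : ground reflectance (assumed value)
          """
          tilt = math.radians(tilt_deg)
          aoi = math.radians(aoi_deg)
          beam = dni * max(math.cos(aoi), 0.0)
          sky_diffuse = dhi * (1.0 + math.cos(tilt)) / 2.0
          ground = ghi * albedo * (1.0 - math.cos(tilt)) / 2.0
          return beam + sky_diffuse + ground

      # Illustrative clear-sky-like numbers:
      print(poa_irradiance_isotropic(dni=800.0, dhi=100.0, ghi=700.0,
                                     tilt_deg=30.0, aoi_deg=25.0))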

  15. Arctic Ice Dynamics Joint Experiment 1975-1976. Physical Oceanography Data Report, Salinity, Temperature and Depth Data, Camp Blue Fox. Volume II.

    DTIC Science & Technology

    1980-02-01


  16. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.
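
    In the multislice method underlying this code, the electron wavefunction is advanced slice by slice: each slice applies a phase-grating transmission function followed by Fresnel propagation to the next slice, evaluated efficiently with FFTs. The sketch below shows one such propagation step for a given transmission function; it is the textbook single-step algorithm, not an excerpt of STEMsalabim (which adds frozen-lattice averaging, probe scanning and the parallelization discussed above).

      import numpy as np

      def multislice_step(psi, transmission, wavelength, dz, dx):
          """Advance the wavefunction by one slice: phase grating, then Fresnel propagation.

          psi, transmission : complex 2-D arrays on a grid with pixel size dx
          wavelength, dz    : electron wavelength and slice thickness (same units as dx)
          """
          n = psi.shape[0]
          k = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
          kx, ky = np.meshgrid(k, k, indexing="ij")
          propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
          psi = psi * transmission                         # interaction with the slice
          return np.fft.ifft2(np.fft.fft2(psi) * propagator)

      # Toy run: plane wave through a weak random phase object on a 64x64 grid
      n = 64
      psi = np.ones((n, n), dtype=complex)
      t = np.exp(1j * 0.05 * np.random.rand(n, n))
      psi = multislice_step(psi, t, wavelength=2.51e-2, dz=2.0, dx=0.1)   # Angstrom units
      print(abs(psi).mean())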

  17. Design Information for Civil Works Housing.

    DTIC Science & Technology

    1984-01-01

    Guidance excerpts on storage for lawn mowers, garden equipment, and bicycles (1-stall and 2-stall garages, 1-stall carport), including weatherproof 110-V outlets as required per applicable code.

  18. Shielding from space radiations

    NASA Technical Reports Server (NTRS)

    Chang, C. Ken; Badavi, Forooz F.; Tripathi, Ram K.

    1993-01-01

    This progress report, covering the period December 1, 1992 to June 1, 1993, presents the development of an analytical solution to the heavy ion transport equation in terms of a Green's function formalism. The mathematical development is recast into a highly efficient computer code for space applications. The efficiency of this algorithm is accomplished by a nonperturbative technique of extending the Green's function over the solution domain. The code may also be applied to accelerator boundary conditions to allow code validation in laboratory experiments. Results from the isotopic version of the code, with 59 isotopes present, are given for a single-layer target material for the case of an iron beam projectile at 600 MeV/nucleon in water. A listing of the single-layer isotopic version of the code is included.

  19. The Effect of Strain upon the Velocity of Sound and the Velocity of Free Retraction for Natural Rubber.

    DTIC Science & Technology

    1982-05-01


  20. Plasma Theory and Simulation.

    DTIC Science & Technology

    1978-07-01

    (1) A paper titled "Particle-Fluid Hybrid Codes Applied to Beam-Plasma, Ring-Plasma Instabilities" was presented at Monterey (see Section V). (2) A. Peiravi and C. K. Birdsall, "Self-Heating of ..."

  1. Aeroacoustic Analysis of Turbofan Noise Generation

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.; Envia, Edmane

    1996-01-01

    This report provides an updated version of the analytical documentation for the V072 Rotor Wake/Stator Interaction Code. It presents the theoretical derivation of the equations used in the code and, where necessary, documents the enhancements and changes made to the original code since its first release. V072 is a package of FORTRAN computer programs which calculate the in-duct acoustic modes excited by a fan/stator stage operating in a subsonic mean flow. Sound is generated by the stator vanes interacting with the mean wakes of the rotor blades. In this updated version, only the tonal noise, produced at the blade passing frequency and its harmonics, is described. The broadband noise component analysis, which was part of the original report, is not included here. The code provides outputs of modal pressure and power amplitudes generated by the rotor-wake/stator interaction. The rotor/stator stage is modeled as an ensemble of blades and vanes of zero camber and thickness enclosed within an infinite hard-walled annular duct. The amplitude of each propagating mode is computed and summed to obtain the harmonics of sound power flux within the duct for both upstream and downstream propagating modes.

  2. Analysis of localised dose distribution in human body by Monte Carlo code system for photon irradiation.

    PubMed

    Ohnishi, S; Odano, N; Nariyama, N; Saito, K

    2004-01-01

    In routine personal dosimetry, whole-body irradiation is assumed. However, partial irradiation is increasingly common, and the protection quantities obtained under those irradiation conditions behave differently. A code system has been developed, and the effective dose and organ absorbed doses have been calculated for a horizontal narrow photon beam incident from various directions on three representative body sections located 40, 50 and 60 cm from the top of the head. This work covers 24 beam directions at 15-degree intervals from 0 to 345 degrees, three energies (45 keV, 90 keV and 1.25 MeV), and three beam diameters (1, 2 and 4 cm). The results show that a beam incident from a diagonally frontal or other specific direction produces a peak dose in the case of partial irradiation.

  3. Analysis of dose-LET distribution in the human body irradiated by high energy hadrons.

    PubMed

    Sato, T; Tsuda, S; Sakamoto, Y; Yamaguchi, Y; Niita, K

    2003-01-01

    For the purposes of radiological protection, it is important to analyse profiles of the particle field inside a human body irradiated by high energy hadrons, since they can produce a variety of secondary particles which play an important role in the energy deposition process, and characterise their radiation qualities. Therefore Monte Carlo calculations were performed to evaluate dose distributions in terms of the linear energy transfer of ionising particles (dose-LET distribution) using a newly developed particle transport code (Particle and Heavy Ion Transport code System, PHITS) for incidences of neutrons, protons and pions with energies from 100 MeV to 200 GeV. Based on these calculations, it was found that more than 80% and 90% of the total deposition energies are attributed to ionisation by particles with LET below 10 keV microm(-1) for the irradiations of neutrons and the charged particles, respectively.

  4. Simulation of radiation in laser produced plasmas

    NASA Astrophysics Data System (ADS)

    Colombant, D. G.; Klapisch, M.; Deniz, A. V.; Weaver, J.; Schmitt, A.

    1999-11-01

    The radiation hydrodynamics code FAST1D (J.H. Gardner, A.J. Schmitt, J.P. Dahlburg, C.J. Pawley, S.E. Bodner, S.P. Obenschain, V. Serlin and Y. Aglitskiy, Phys. Plasmas 5, 1935 (1998)) was used directly (i.e. without a postprocessor) to simulate radiation emitted from flat targets irradiated by the Nike laser, from 10^12 W/cm^2 to 10^13 W/cm^2. We use enough photon groups to resolve spectral lines. Opacities are obtained from the STA code (A. Bar-Shalom, J. Oreg, M. Klapisch and T. Lehecka, Phys. Rev. E 59, 3512 (1999)), and non-LTE effects are described with the Busquet model (M. Busquet, Phys. Fluids B 5, 4191 (1993)). Results are compared to transmission grating spectra in the range 100-600 eV, and to time-resolved calibrated filtered diodes (spectral windows around 100, 180, 280 and 450 eV).

  5. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  6. Extension of the energy range of the experimental activation cross-sections data of longer-lived products of proton induced nuclear reactions on dysprosium up to 65MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-04-01

    Activation cross-sections data of longer-lived products of proton induced nuclear reactions on dysprosium were extended up to 65MeV by using stacked foil irradiation and gamma spectrometry experimental methods. Experimental cross-sections data for the formation of the radionuclides (159)Dy, (157)Dy, (155)Dy, (161)Tb, (160)Tb, (156)Tb, (155)Tb, (154m2)Tb, (154m1)Tb, (154g)Tb, (153)Tb, (152)Tb and (151)Tb are reported in the 36-65MeV energy range, and compared with an old dataset from 1964. The experimental data were also compared with the results of cross section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS nuclear reaction model code as listed in the latest on-line libraries TENDL 2013. Copyright © 2015. Published by Elsevier Ltd.

  7. Draft genome sequence of four coccolithoviruses: Emiliania huxleyi virus EhV-88, EhV-201, EhV-207, and EhV-208.

    PubMed

    Nissimov, Jozef I; Worthy, Charlotte A; Rooks, Paul; Napier, Johnathan A; Kimmance, Susan A; Henn, Matthew R; Ogata, Hiroyuki; Allen, Michael J

    2012-03-01

    The Coccolithoviridae are a group of viruses which infect the marine coccolithophorid microalga Emiliania huxleyi. The Emiliania huxleyi viruses (known as EhVs) described herein have 160- to 180-nm diameter icosahedral structures, have genomes of approximately 400 kbp, and consist of more than 450 predicted coding sequences (CDSs). Here, we describe the genomic features of four newly sequenced coccolithoviruses (EhV-88, EhV-201, EhV-207, and EhV-208) together with their draft genome sequences and their annotations, highlighting the homology and heterogeneity of these genomes to the EhV-86 model reference genome.

  8. ANITA-2000 activation code package - updating of the decay data libraries and validation on the experimental data of the 14 MeV Frascati Neutron Generator

    NASA Astrophysics Data System (ADS)

    Frisoni, Manuela

    2016-03-01

    ANITA-2000 is a code package for the activation characterization of materials exposed to neutron irradiation released by ENEA to OECD-NEADB and ORNL-RSICC. The main component of the package is the activation code ANITA-4M that computes the radioactive inventory of a material exposed to neutron irradiation. The code requires the decay data library (file fl1) containing the quantities describing the decay properties of the unstable nuclides and the library (file fl2) containing the gamma ray spectra emitted by the radioactive nuclei. The fl1 and fl2 files of the ANITA-2000 code package, originally based on the evaluated nuclear data library FENDL/D-2.0, were recently updated on the basis of the JEFF-3.1.1 Radioactive Decay Data Library. This paper presents the results of the validation of the new fl1 decay data library through the comparison of the ANITA-4M calculated values with the measured electron and photon decay heats and activities of fusion material samples irradiated at the 14 MeV Frascati Neutron Generator (FNG) of the NEA-Frascati Research Centre. Twelve material samples were considered, namely: Mo, Cu, Hf, Mg, Ni, Cd, Sn, Re, Ti, W, Ag and Al. The ratios between calculated and experimental values (C/E) are shown and discussed in this paper.
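
    The validation figure of merit quoted above, the C/E ratio, is simply the calculated value divided by the measured one for each sample and quantity. A minimal helper for tabulating such ratios might look like the following sketch; the numbers are hypothetical and are not the FNG results.

      def c_over_e(calculated, experimental):
          """Return C/E ratios keyed by sample for matching dictionaries of values."""
          return {sample: calculated[sample] / experimental[sample]
                  for sample in calculated if sample in experimental}

      # Hypothetical decay-heat values [arbitrary units], not the measured FNG data:
      calc = {"Mo": 1.05, "Cu": 0.98, "W": 1.20}
      meas = {"Mo": 1.00, "Cu": 1.00, "W": 1.00}
      print(c_over_e(calc, meas))   # e.g. {'Mo': 1.05, 'Cu': 0.98, 'W': 1.2}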

  9. Analysis of Proton Radiation Effects on Gallium Nitride High Electron Mobility Transistors

    DTIC Science & Technology

    2017-03-01

    ... energy levels on a GaN-on-silicon high electron mobility transistor was created. Based on physical results of 2.0-MeV proton irradiation to fluence ..., and on the physical device under 2.0-MeV proton irradiation, predictions were made for 5.0-, 10.0-, 20.0- and 40.0-MeV proton irradiation. The model generally ... Keywords: gallium nitride, high electron mobility transistor, electronics, 2 MeV proton irradiation, radiation effects.

  10. Draft genome sequence of the coccolithovirus Emiliania huxleyi virus 202.

    PubMed

    Nissimov, Jozef I; Worthy, Charlotte A; Rooks, Paul; Napier, Johnathan A; Kimmance, Susan A; Henn, Matthew R; Ogata, Hiroyuki; Allen, Michael J

    2012-02-01

    Emiliania huxleyi virus 202 (EhV-202) is a member of the Coccolithoviridae, a group of viruses that infect the marine coccolithophorid Emiliania huxleyi. EhV-202 has a 160- to 180-nm-diameter icosahedral structure and a genome of approximately 407 kbp, consisting of 485 coding sequences (CDSs). Here we describe the genomic features of EhV-202, together with a draft genome sequence and its annotation, highlighting the homology and heterogeneity of this genome in comparison with the EhV-86 reference genome.

  11. Experimental measurements with Monte Carlo corrections and theoretical calculations of neutron inelastic scattering cross section of 115In

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Xiao, Jun; Luo, Xiaobing

    2016-10-01

    The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross-section data were theoretically calculated between 1 and 15 MeV with the TALYS software code; the theoretical results of this study are in reasonable agreement with the available experimental results.

  12. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    NASA Astrophysics Data System (ADS)

    Watanabe, Y.; Abe, S.

    2014-06-01

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component, from 10 MeV up to several hundreds of MeV, of secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  13. A retroviral oncogene, akt, encoding a serine-threonine kinase containing an SH2-like region.

    PubMed

    Bellacosa, A; Testa, J R; Staal, S P; Tsichlis, P N

    1991-10-11

    The v-akt oncogene codes for a 105-kilodalton fusion phosphoprotein containing Gag sequences at its amino terminus. Sequence analysis of v-akt and biochemical characterization of its product revealed that it codes for a protein kinase C-related serine-threonine kinase whose cellular homolog is expressed in most tissues, with the highest amount found in thymus. Although Akt is a serine-threonine kinase, part of its regulatory region is similar to the Src homology-2 domain, a structural motif characteristic of cytoplasmic tyrosine kinases that functions in protein-protein interactions. This suggests that Akt may form a functional link between tyrosine and serine-threonine phosphorylation pathways.

  14. SCELib3.0: The new revision of SCELib, the parallel computational library of molecular properties in the Single Center Approach

    NASA Astrophysics Data System (ADS)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions, aiming at a significant enhancement of its capabilities to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated, the code has been ported to platforms based on accelerating coprocessors such as the NVIDIA GPGPU, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system. Program summary: Program title: SCELib3.0; Catalogue identifier: ADMG_v3_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 2 018 862; No. of bytes in distributed program, including test data, etc.: 4 955 014; Distribution format: tar.gz; Programming language: C; Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x; Computer: all SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors; Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES); Has the code been vectorized or parallelized?: yes, 1 to 32 (CPU or GPU) used; RAM: up to 32 GB depending on the molecular system and runtime parameters; Classification: 16.5; Catalogue identifier of previous version: ADMG_v2_0; Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51; External routines: CUDA libraries (SDK V2.x); Does the new version supersede the previous version?: yes. Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design. Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTOs), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ,φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities.
Reasons for new version: The present release of SCELib allows the study of larger molecular systems with respect to the previous versions by means of theoretical and technological advances, with the first implementation of the code over a many-core computing system. Summary of revisions: The major features added with respect to SCELib Version 2.0 are the following. Molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (on Na-La, Hf-Bi elements) [1] can now be single-center-expanded; this addition required modifications of (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, and (v) the vst code. The classes of platforms supported now include two more architectures based on accelerated coprocessors, the Nvidia GSeries GPGPU and the ClearSpeed e720 (ClearSpeed version experimental; initial preliminary porting of the sphint() function, not for production runs - see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs (sphint code) has been implemented in the new code. The Ih symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure. The orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accord with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian 03, revision C.02 [2]), which is used to generate the input multi-centre molecular wavefunctions (z-axis perpendicular to the symmetry plane). The abelian subgroup for the Cs point group has been changed from C1 to Cs. Atomic basis functions including g-type GTOs can now be single-center-expanded. Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into the available RAM memory. In the latter case, a feature of the program is to memory-map a disk file in order to efficiently access the memory data through a disk device. The parallel GP-GPU implementation limits the number of CPU threads to the number of GPU cores present. Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r,θ,φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must further be taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law cannot be given at present. References: [1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299. [2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
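
    The numerical heart of the SCE approach, projecting a polycentric quantity onto a single center by angular quadrature, can be illustrated in the axially symmetric case: expand f(r, θ) in Legendre polynomials using a Gauss-Legendre rule in cos θ. The sketch below is a stripped-down Python illustration of that projection step for an off-center Gaussian density; SCELib itself works with the full (θ, φ) expansion, GTO wavefunction inputs and parallel execution, none of which is reproduced here.

      import numpy as np
      from numpy.polynomial.legendre import leggauss, legval

      def sce_coefficients(f, r, lmax, nquad=64):
          """Single-center Legendre coefficients f_l(r) of an axially symmetric f(r, theta).

          f_l(r) = (2l+1)/2 * Integral_{-1}^{1} f(r, theta) P_l(cos theta) d(cos theta),
          evaluated with an nquad-point Gauss-Legendre quadrature.
          """
          x, w = leggauss(nquad)                 # nodes and weights in cos(theta)
          theta = np.arccos(x)
          values = f(r, theta)                   # f sampled on the quadrature ring
          coeffs = []
          for l in range(lmax + 1):
              pl = legval(x, [0] * l + [1])      # P_l at the quadrature nodes
              coeffs.append((2 * l + 1) / 2.0 * np.sum(w * values * pl))
          return np.array(coeffs)

      # Example: a Gaussian density displaced from the expansion center, on a ring r = 1.0
      f = lambda r, theta: np.exp(-((r * np.cos(theta) - 0.5) ** 2 + (r * np.sin(theta)) ** 2))
      print(sce_coefficients(f, r=1.0, lmax=4))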

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ke; Zhang, Yanwen; Zhu, Zihua

    Accurate information on electronic stopping powers is fundamental for broad advances in the electronics industry, space exploration, national security, and sustainable energy technologies. The Stopping and Range of Ions in Matter (SRIM) code has been widely applied to predict stopping powers and ion distributions for decades. Recent experimental results have, however, shown considerable errors in the SRIM predictions for stopping of heavy ions in compounds containing light elements, indicating an urgent need to improve current stopping power models. The electronic stopping powers of 35Cl, 80Br, 127I, and 197Au ions are experimentally determined in two important functional materials, SiC and SiO2, from tens to hundreds of keV/u based on a single-ion technique. By combining these measurements with reciprocity theory, new electronic stopping powers are suggested in a region from 0 to 15 MeV, where large deviations from SRIM predictions are observed. For independent experimental validation of the electronic stopping powers we determined, Rutherford backscattering spectrometry (RBS) and secondary ion mass spectrometry (SIMS) are utilized to measure the depth profiles of implanted Au ions in SiC with energies from 700 keV to 15 MeV. The measured ion distributions from both RBS and SIMS are considerably deeper (up to ~30%) than the predictions from the commercial SRIM code. In comparison, the new electronic stopping power values are utilized in a modified TRIM-85 (the original version of SRIM) code, M-TRIM, to predict ion distributions, and the results are in good agreement with the experimentally measured ion distributions.
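
    For orientation, the link between an electronic stopping power S(E) and an implanted-ion depth scale is the continuous-slowing-down integral R(E0) = ∫ dE / S(E); a systematic error in S(E) therefore maps directly onto a shift of the predicted range. The sketch below is purely illustrative (the stopping-power curve is invented and is not the SiC or SiO2 data of this record):

      # Toy example, not SRIM or M-TRIM: continuous-slowing-down range from a
      # stopping-power curve, R(E0) = integral of dE / S(E).
      import numpy as np

      E = np.linspace(0.05, 15.0, 1000)          # ion energy grid, MeV (illustrative)
      S = 3.0 * np.sqrt(E) / (1.0 + 0.25 * E)    # hypothetical electronic stopping, MeV/um

      def csda_range(E, S):
          return np.trapz(1.0 / S, E)            # depth (um) over which the ion slows from max(E) to min(E)

      print(f"Toy CSDA range for a 15 MeV ion: {csda_range(E, S):.2f} um")
      # Scaling S down by 10% lengthens the predicted range by roughly the same factor:
      print(f"With S reduced by 10%:           {csda_range(E, 0.9 * S):.2f} um")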

  16. Albany v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andrew; Phipps, Eric; Ostien, Jakob

    2016-01-13

    The Albany code is a general-purpose finite element code for solving partial differential equations (PDEs). Albany is a research code that demonstrates how a PDE code can be built by interfacing many of the open-source software libraries that are released under Sandia's Trilinos project. Part of the mission of Albany is to be a testbed for new Trilinos libraries, to refine their methods, usability, and interfaces. Albany includes hooks to optimization and uncertainty quantification algorithms, including those in Trilinos as well as those in the Dakota toolkit. Because of this, Albany is a desirable starting point for new code development efforts that wish to make heavy use of Trilinos. Albany is both a framework and the host for specific finite element applications. These applications have project names and can be controlled by configuration options when the code is compiled, but are all developed and released as part of the single Albany code base. These include the LCM, QCAD, FELIX, Aeras, and ATO applications.

  17. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is a scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition makes it possible to combine the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is built on Python and can run on both CPU and GPU (GPU runs are typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.
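
    The radial part of such a spectral cylindrical decomposition can be illustrated with an ordinary Fourier-Bessel series; the sketch below is a generic projection onto Bessel modes and is not FBPIC's internal algorithm or API (function and variable names are invented):

      # Generic Fourier-Bessel projection sketch (not FBPIC code): an axisymmetric
      # profile f(r) on 0 <= r <= rmax is expanded in modes J0(kn*r) with J0(kn*rmax) = 0.
      import numpy as np
      from scipy.special import j0, j1, jn_zeros

      def bessel_mode_amplitudes(f, rmax, nmodes=8, nr=2000):
          r = np.linspace(0.0, rmax, nr)
          kn = jn_zeros(0, nmodes) / rmax                # radial wavenumbers of the modes
          amps = np.empty(nmodes)
          for i, k in enumerate(kn):
              norm = 0.5 * rmax**2 * j1(k * rmax) ** 2   # orthogonality normalisation
              amps[i] = np.trapz(f(r) * j0(k * r) * r, r) / norm
          return kn, amps

      # Example: decompose a Gaussian radial profile
      kn, amps = bessel_mode_amplitudes(lambda r: np.exp(-(r / 2.0) ** 2), rmax=10.0)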

  18. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
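
    As a reminder of what the point-kernel class of methods in this comparison does, the sketch below evaluates an attenuated point-source flux with a Berger-form buildup factor; it is illustrative only (the attenuation coefficient and buildup parameters are placeholders, not the data used by GGG-GP or QADMOD-GP):

      # Illustrative point-kernel estimate, not the GGG-GP/QADMOD-GP implementations.
      import math

      def point_kernel_flux(S, d_cm, mu_cm, a=1.0, b=0.05):
          """Uncollided-plus-buildup photon flux at distance d_cm from an isotropic
          point source of strength S [photons/s]; mu_cm is the linear attenuation
          coefficient [1/cm]; Berger buildup B = 1 + a*mu*d*exp(b*mu*d) with toy a, b."""
          mfp = mu_cm * d_cm                              # optical thickness in mean free paths
          buildup = 1.0 + a * mfp * math.exp(b * mfp)
          return S * buildup * math.exp(-mfp) / (4.0 * math.pi * d_cm ** 2)

      # Example: 1 MeV photons through 30 cm of concrete (mu of roughly 0.15 per cm)
      print(point_kernel_flux(S=1e10, d_cm=30.0, mu_cm=0.15))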

  19. Microcomputer Control of a Hydraulically Actuated Piston.

    DTIC Science & Technology

    1987-06-01

    [Scanned-document OCR residue; no abstract is recoverable. Legible fragments of the table of contents include: Frequency Response Test; Model Validation; Appendix D: Digital System Simulation Code (DSL); Appendix E: Digital Logic Test.]

  20. Radial dependence of lineal energy distribution of 290-MeV/u carbon and 500-MeV/u iron ion beams using a wall-less tissue-equivalent proportional counter.

    PubMed

    Tsuda, Shuichi; Sato, Tatsuhiko; Watanabe, Ritsuko; Takada, Masashi

    2015-01-01

    Using a wall-less tissue-equivalent proportional counter for a 0.72-μm site in tissue, we measured the radial dependence of the lineal energy distribution, yf(y), of 290-MeV/u carbon ions and 500-MeV/u iron ion beams. The measured yf(y) distributions and the dose-mean lineal energy, y_D, were compared with calculations performed with the track structure simulation code TRACION and the microdosimetric function of the Particle and Heavy Ion Transport code System (PHITS). The values of the measured y_D were consistent with the calculated results within an error of 2%, but differences in the shape of yf(y) were observed for iron ion irradiation. This result indicates that further improvement of the calculation model for the yf(y) distribution in PHITS is needed for the analytical function that describes energy deposition by delta rays, particularly for primary ions having linear energy transfer in excess of a few hundred keV/μm. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
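
    The quantity compared above, the dose-mean lineal energy y_D, is obtained from the measured frequency distribution f(y) as y_D = ∫ y² f(y) dy / ∫ y f(y) dy. A minimal bookkeeping sketch (with an invented histogram, not the measured data of this study):

      # Frequency-mean and dose-mean lineal energy from a toy f(y) distribution.
      import numpy as np

      y = np.logspace(-1, 3, 200)                  # lineal energy, keV/um
      f = np.exp(-0.5 * np.log(y / 20.0) ** 2)     # invented f(y), arbitrary shape
      f /= np.trapz(f, y)                          # normalise to unit area

      y_F = np.trapz(y * f, y)                     # frequency-mean lineal energy
      y_D = np.trapz(y ** 2 * f, y) / y_F          # dose-mean lineal energy
      print(f"y_F = {y_F:.1f} keV/um, y_D = {y_D:.1f} keV/um")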

  1. Assessment of Current Jet Noise Prediction Capabilities

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas

    2008-01-01

    An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined with the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated: JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial effort went into justifying the experimental datasets used in the evaluations. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it outside experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.

  2. Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core Concrete Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R; Farmer, Mitchell; Francis, Matthew W

    Lower head failure and corium concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH.

  3. Fukushima Daiichi Unit 1 ex-vessel prediction: Core melt spreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, M. T.; Robb, K. R.; Francis, M. W.

    Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis has been carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially-dependent melt conditions and extent of spreading during relocation from the vessel. Lastly, this information was then used as input for the long-term debris coolability analysis with CORQUENCH that is reported in a companion paper.

  4. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
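
    For context, inverse (inverse-reaction) cross sections of this type are essentially geometric cross sections modulated by a barrier or threshold factor. The sketch below uses a generic form, sigma ≈ pi*R²*(1 − Vc/E) for a charged ejectile, with placeholder constants; it is not the exact Dostrovsky or CEM03.03 parameterization:

      # Generic geometric-plus-Coulomb-barrier inverse cross section (illustrative only).
      import math

      R0_FM = 1.2            # nuclear radius parameter, fm (placeholder)
      E2_MEV_FM = 1.44       # e^2 in MeV*fm

      def inverse_xs_charged(E_mev, z_ej, Z_res, A_res):
          """Reaction cross section (mb) for a charged ejectile of energy E_mev
          on a residual nucleus (Z_res, A_res)."""
          R = R0_FM * A_res ** (1.0 / 3.0)                  # nuclear radius, fm
          Vc = z_ej * Z_res * E2_MEV_FM / R                 # Coulomb barrier, MeV
          if E_mev <= Vc:
              return 0.0
          return 10.0 * math.pi * R ** 2 * (1.0 - Vc / E_mev)   # 1 fm^2 = 10 mb

      print(inverse_xs_charged(30.0, z_ej=1, Z_res=26, A_res=56))  # proton on an Fe-like residual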

  5. V27 Test Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stofleth, Jerome H.; Tribble, Megan Kimberly; Crocker, Robert W.

    2017-05-01

    The V27 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a replacement vessel for use on the P2 Explosive Destruction Systems. It is the third EDS vessel to be fabricated under Code Case 2564 of the ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the Code Case, is nine (9) pounds TNT-equivalent for up to 637 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico, to qualify the vessel for explosive use. The primary qualification test consisted of six 1.5-pound charges of Composition C-4 (equivalent to 11.25 pounds TNT) distributed around the vessel in accordance with the User Design Specification. Four subsequent tests using less explosive evaluated the effects of slight variations in the orientation of the charges. All vessel acceptance criteria were met.

  6. Heat transfer, thermal stress analysis and the dynamic behaviour of high power RF structures. [MARC and SUPERFISH codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, J.; Labrie, J.P.

    1983-08-01

    A general purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize loss in shunt impedance and frequency shifts during high power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.

  7. Large-scale two-photon imaging revealed super-sparse population codes in the V1 superficial layer of awake monkeys.

    PubMed

    Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing

    2018-04-26

    One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.

  8. MEASUREMENTS OF NEUTRON SPECTRA IN 0.8-GEV AND 1.6-GEV PROTON-IRRADIATED W AND NA THICK TARGETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Titarenko, Y. E.; Batyaev, V. F.; Zhivun, V. M.

    2001-01-01

    Measurements of neutron spectra in W and Na targets irradiated by 0.8 GeV and 1.6 GeV protons are presented. The measurements were made by the time-of-flight (TOF) technique using the proton beam from the ITEP U-10 synchrotron. Neutrons were detected with BICRON-511 liquid scintillator-based detectors. The neutron detection efficiency was calculated via the SCINFUL and CECIL codes. The W results are compared with similar data obtained elsewhere. The measured neutron spectra are compared with LAHET and CEM2k code simulation results, and an attempt is made to explain some observed disagreements between experiments and simulations. The presented results are of interest both in terms of nuclear data buildup and as a benchmark of the up-to-date predictive power of the simulation codes used in designing hybrid accelerator-driven system (ADS) facilities with sodium-cooled tungsten targets.
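
    The time-of-flight technique mentioned here converts a measured flight time t over a path L into neutron kinetic energy relativistically, E = m_n c² (γ − 1) with γ = 1/√(1 − (L/ct)²). A small sketch of that conversion (the flight path and timing numbers are placeholders, not the ITEP setup):

      # Relativistic neutron energy from time of flight (placeholder geometry).
      import math

      MN_C2_MEV = 939.565        # neutron rest energy, MeV
      C_M_PER_NS = 0.299792458   # speed of light, m/ns

      def tof_energy(L_m, t_ns):
          beta = L_m / (C_M_PER_NS * t_ns)
          if beta >= 1.0:
              raise ValueError("unphysical flight time")
          gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
          return MN_C2_MEV * (gamma - 1.0)

      print(f"{tof_energy(L_m=3.0, t_ns=25.0):.1f} MeV")   # e.g. a 3 m path and a 25 ns flight time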

  9. Fukushima Daiichi Unit 1 ex-vessel prediction: Core melt spreading

    DOE PAGES

    Farmer, M. T.; Robb, K. R.; Francis, M. W.

    2016-10-31

    Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis has been carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially-dependent melt conditions and extent of spreading during relocation from the vessel. Lastly, this information was then used as input for the long-term debris coolability analysis with CORQUENCH that is reported in a companion paper.

  10. AMPX-77: A modular code system for generating coupled multigroup neutron-gamma cross-section libraries from ENDF/B-IV and/or ENDF/B-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Ford, W.E. III; Petrie, L.M.

    AMPX-77 is a modular system of computer programs that pertain to nuclear analyses, with a primary emphasis on tasks associated with the production and use of multigroup cross sections. All basic cross-section data are to be input in the formats used by the Evaluated Nuclear Data Files (ENDF/B), and output can be obtained in a variety of formats, including its own internal and very general formats, along with a variety of other useful formats used by major transport, diffusion theory, and Monte Carlo codes. Processing is provided for both neutron and gamma-ray data. The present release contains codes all written in the FORTRAN-77 dialect of FORTRAN and will process ENDF/B-V and earlier evaluations; major modules are being upgraded to process ENDF/B-VI and will be released when a complete collection of usable routines is available.
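
    The core operation of a multigroup processing system of this kind is the flux-weighted group collapse, sigma_g = ∫_g sigma(E) phi(E) dE / ∫_g phi(E) dE. A schematic version with invented pointwise data (real AMPX modules read ENDF/B files and handle resonance self-shielding, which is not shown):

      # Flux-weighted collapse of a pointwise cross section to group constants (schematic).
      import numpy as np

      E = np.logspace(-5, 7, 5000)                 # energy grid, eV
      sigma = 5.0 + 50.0 / np.sqrt(E)              # invented 1/v-like cross section, barns
      phi = 1.0 / E                                # invented weighting spectrum
      group_bounds = [1e-5, 0.625, 1e3, 1e5, 1e7]  # eV, a tiny 4-group structure

      sigma_g = []
      for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
          m = (E >= lo) & (E < hi)
          sigma_g.append(np.trapz(sigma[m] * phi[m], E[m]) / np.trapz(phi[m], E[m]))
      print(sigma_g)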

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.

    Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration; here it has been successfully implemented for the Aditya limiter configuration. The penetration of neutral hydrogen atoms is studied with various atomic and molecular contributions, and it is found that the maximum contribution comes from the dissociation processes. The Hα spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the neutral hydrogen density and Hα emissivity profiles are analysed for various edge temperature profiles, and it is found that the Hα emission at the plasma edge changes little with the variation of edge temperature (7 to 40 eV).

  12. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie M.; Mashnik, Stepan G.

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  13. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V effort, which requires computational efficiency. This has motivated various improvements, including angular parallelization, outer iteration acceleration, and the development of peripheral tools. To guide future improvements to the code's efficiency, a better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL's Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former's accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
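
    A parallel performance model of the kind described typically combines a per-cell/angle/group compute cost with a communication term. The sketch below shows that generic shape only; the coefficients are placeholders to be fitted to measured timings and are not the Falcon measurements of this work:

      # Generic PPM shape: runtime = serial part + distributed work + communication.
      def predicted_runtime(n_ranks, n_cells, n_angles, n_groups,
                            t_cell=2.0e-7, t_serial=0.5, t_latency=5.0e-6,
                            bytes_per_msg=8.0e4, bandwidth=1.0e9, n_messages=200):
          work = n_cells * n_angles * n_groups * t_cell / n_ranks       # parallel portion model
          comm = n_messages * (t_latency + bytes_per_msg / bandwidth)   # communication model
          return t_serial + work + comm

      for p in (16, 64, 256, 1024):
          print(p, round(predicted_runtime(p, n_cells=2e6, n_angles=48, n_groups=30), 2))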

  14. Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex

    PubMed Central

    2017-01-01

    Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794

  15. Performances of a date dissemination code on telephone lines using commercial modems

    NASA Technical Reports Server (NTRS)

    Cordara, F.; Pettiti, V.; Quasso, R.; Rubiola, E.

    1993-01-01

    A coded time/date information dissemination system (CTD), based on telephone lines and commercial modems, is now in its experimental phase in Italy at IEN. This service, born from a cooperation with other metrological laboratories (TUG, Austria; SNT, Sweden; VSL, The Netherlands), represents an attempt towards European standardization. Some results are given of an experimental analysis in which a few modems were tested, both in laboratory conditions and connected to the telephone network, in order to evaluate the timing capability of the system. When the system is used in one-way mode, in many practical cases the modem delay turns out to be the main factor limiting the accuracy, even more than the telephone line delays. If the two-way mode is used, the modem asymmetry, i.e., the delay difference between transmission and reception, is almost always the most important source of uncertainty, provided the link does not include a space segment. Comparing the widely used V.22 modems to the old V.21 ones, the latter turn out to be better both in delay time (30-100 ms for V.22, and 7-15 ms for V.21) and in asymmetry (10-50 ms for V.22, and 10 μs for V.21). Time transfer accuracies of 10 μs (same town) to 100 μs (long distance calls) were obtained in two-way mode with commercial V.21 modems.
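
    The one-way/two-way distinction above comes down to simple arithmetic: in one-way mode the whole (unknown) line-plus-modem delay enters the time-transfer error, while in two-way mode the symmetric part of the delay cancels and only half of the asymmetry between the two directions remains. A toy calculation with invented delays:

      # Toy illustration of one-way vs two-way telephone time transfer (delays in ms, invented).
      d_ab = 45.0          # delay A -> B (line plus modems)
      d_ba = 45.8          # delay B -> A, i.e. 0.8 ms of asymmetry

      # One-way: the receiver cannot measure d_ab, so the full delay is the error.
      one_way_error = d_ab

      # Two-way: both sides timestamp both directions; the offset estimate is wrong
      # only by half the asymmetry.
      two_way_error = abs(d_ab - d_ba) / 2.0

      print(f"one-way error ~ {one_way_error:.1f} ms, two-way error ~ {two_way_error:.2f} ms")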

  16. Imaging of spatially extended hot spots with coded apertures for intra-operative nuclear medicine applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Potiriadis, C.; Karafasoulis, K.; Loukas, D.; Lambropoulos, C. P.

    2017-01-01

    Coded aperture imaging transcends planar imaging with conventional collimators in efficiency and Field of View (FOV). We present experimental results for the detection of 141 keV and 122 keV γ-photons emitted by uniformly extended 99mTc and 57Co hot-spots along with simulations of uniformly and normally extended 99mTc hot-spots. These results prove that the method can be used for intra-operative imaging of radio-traced sentinel nodes and thyroid remnants. The study is performed using a setup of two gamma cameras, each consisting of a coded aperture (or mask) of a Modified Uniformly Redundant Array (MURA) of rank 19 positioned on top of a CdTe detector. The detector pixel pitch is 350 μm and its active area is 4.4 × 4.4 cm2, while the mask element size is 1.7 mm. The detectable photon energy ranges from 15 keV up to 200 keV with an energy resolution of 3-4 keV FWHM. Triangulation is exploited to estimate the 3D spatial coordinates of the radioactive spots within the system FOV. Two extended sources with uniformly distributed activity (11 and 24 mm in diameter, respectively), positioned at 16 cm from the system and with 3 cm distance between their centers, can be resolved and localized with accuracy better than 5%. The results indicate that the estimated positions of spatially extended sources lie within their volume size and that neighboring sources, even with a low level of radioactivity, such as 30 MBq, can be clearly distinguished with an acquisition time of about 3 seconds.
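
    The rank-19 MURA mask used here can be written down directly from quadratic residues modulo 19, and decoding is a correlation of the recorded image with a matched ±1 array. The sketch below is the standard textbook construction, not the authors' reconstruction software:

      # Standard MURA construction for prime rank p (here p = 19) and its decoding array.
      import numpy as np

      def mura(p):
          """Return the p x p Modified Uniformly Redundant Array (p prime)."""
          residues = {(x * x) % p for x in range(1, p)}
          C = np.array([1 if i in residues else -1 for i in range(p)])
          A = np.zeros((p, p), dtype=int)
          for i in range(p):
              for j in range(p):
                  if i == 0:
                      A[i, j] = 0
                  elif j == 0:
                      A[i, j] = 1
                  else:
                      A[i, j] = 1 if C[i] * C[j] == 1 else 0
          return A

      def decoding_array(A):
          G = np.where(A == 1, 1, -1)
          G[0, 0] = 1                     # special element of the MURA decoder
          return G

      A = mura(19)
      G = decoding_array(A)
      print(A.sum(), "open elements out of", A.size)   # roughly half-open mask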

  17. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barahona, B.; Jonkman, J.; Damiani, R.

    2014-12-01

    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
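
    The Craig-Bampton reduction mentioned above keeps the interface (boundary) degrees of freedom and a truncated set of fixed-interface normal modes of the interior. The sketch below is the textbook reduction for generic partitioned mass and stiffness matrices; it is not SubDyn source code, and the partitioning and mode count are illustrative:

      # Textbook Craig-Bampton reduction for partitioned (boundary b / interior i) matrices.
      import numpy as np
      from scipy.linalg import eigh

      def craig_bampton(M, K, boundary_dofs, n_modes):
          n = M.shape[0]
          b = np.asarray(boundary_dofs)
          i = np.setdiff1d(np.arange(n), b)
          Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
          Mii = M[np.ix_(i, i)]
          # Static (constraint) modes: interior response to unit boundary displacements
          Phi_c = -np.linalg.solve(Kii, Kib)
          # Fixed-interface normal modes of the interior partition, lowest n_modes kept
          w2, Phi_n = eigh(Kii, Mii)
          Phi_n = Phi_n[:, :n_modes]
          # Transformation [u_b; u_i] = T [u_b; q_modal]
          T = np.zeros((n, len(b) + n_modes))
          T[b, :len(b)] = np.eye(len(b))
          T[np.ix_(i, range(len(b)))] = Phi_c
          T[np.ix_(i, range(len(b), len(b) + n_modes))] = Phi_n
          return T.T @ M @ T, T.T @ K @ T               # reduced mass and stiffness matrices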

  18. ChIPBase v2.0: decoding transcriptional regulatory networks of non-coding RNAs and protein-coding genes from ChIP-seq data.

    PubMed

    Zhou, Ke-Ren; Liu, Shun; Sun, Wen-Ju; Zheng, Ling-Ling; Zhou, Hui; Yang, Jian-Hua; Qu, Liang-Hu

    2017-01-04

    The abnormal transcriptional regulation of non-coding RNAs (ncRNAs) and protein-coding genes (PCGs) contributes to various biological processes and is linked with human diseases, but the underlying mechanisms remain elusive. In this study, we developed ChIPBase v2.0 (http://rna.sysu.edu.cn/chipbase/) to explore the transcriptional regulatory networks of ncRNAs and PCGs. ChIPBase v2.0 has been expanded with ∼10 200 curated ChIP-seq datasets, which represents an approximately 20-fold expansion compared to the previously released version. We identified thousands of binding motif matrices and their binding sites from ChIP-seq data of DNA-binding proteins and predicted millions of transcriptional regulatory relationships between transcription factors (TFs) and genes. We constructed a 'Regulator' module to predict hundreds of TFs and histone modifications that are involved in or affect the transcription of ncRNAs and PCGs. Moreover, we built a web-based tool, Co-Expression, to explore the co-expression patterns between DNA-binding proteins and various types of genes by integrating the gene expression profiles of ∼10 000 tumor samples and ∼9100 normal tissues and cell lines. ChIPBase also provides a ChIP-Function tool and a genome browser to predict functions of diverse genes and visualize various ChIP-seq data. This study will greatly expand our understanding of the transcriptional regulation of ncRNAs and PCGs. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of V1, which receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which synaptic balance is modulated. To conclude, the top-down signal changes the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Fast and accurate modeling of stray light in optical systems

    NASA Astrophysics Data System (ADS)

    Perrin, Jean-Claude

    2017-11-01

    The first problem to be solved in most optical designs with respect to stray light is that of internal reflections on the several surfaces of individual lenses and mirrors, and on the detector itself. The stray light ratio can be considerably reduced by taking stray light into account during the optimization, so as to determine solutions in which the irradiance due to these ghosts is kept to the minimum possible value. Unfortunately, the routines available in most optical design software packages, for example CODE V, do not by themselves permit exact quantitative calculations of the stray light due to these ghosts. Therefore, the engineer in charge of the optical design is confronted with the problem of using two different packages: one for the design and optimization, for example CODE V, and one for stray light analysis, for example ASAP. This makes a complete optimization very complex. Nevertheless, using special techniques and combinations of the routines available in CODE V, it is possible to have at one's disposal a software macro tool to do such an analysis quickly and accurately, including Monte Carlo ray tracing, or taking into account diffraction effects. This analysis can be done in a few minutes, to be compared with hours using other packages.

  1. Code-division-multiplexed readout of large arrays of TES microcalorimeters

    NASA Astrophysics Data System (ADS)

    Morgan, K. M.; Alpert, B. K.; Bennett, D. A.; Denison, E. V.; Doriese, W. B.; Fowler, J. W.; Gard, J. D.; Hilton, G. C.; Irwin, K. D.; Joe, Y. I.; O'Neil, G. C.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Swetz, D. S.

    2016-09-01

    Code-division multiplexing (CDM) offers a path to reading out large arrays of transition edge sensor (TES) X-ray microcalorimeters with excellent energy and timing resolution. We demonstrate the readout of X-ray TESs with a 32-channel flux-summed code-division multiplexing circuit based on superconducting quantum interference device (SQUID) amplifiers. The best detector has energy resolution of 2.28 ± 0.12 eV FWHM at 5.9 keV and the array has mean energy resolution of 2.77 ± 0.02 eV over 30 working sensors. The readout channels are sampled sequentially at 160 ns/row, for an effective sampling rate of 5.12 μs/channel. The SQUID amplifiers have a measured flux noise of 0.17 μΦ0/√Hz (non-multiplexed, referred to the first stage SQUID). The multiplexed noise level and signal slew rate are sufficient to allow readout of more than 40 pixels per column, making CDM compatible with requirements outlined for future space missions. Additionally, because the modulated data from the 32 SQUID readout channels provide information on each X-ray event at the row rate, our CDM architecture allows determination of the arrival time of an X-ray event to within 275 ns FWHM with potential benefits in experiments that require detection of near-coincident events.
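
    In flux-summed code-division multiplexing the N detector signals are combined with ±1 (Walsh-type) weights that change every row period, and the individual signals are recovered by applying the transpose of the orthogonal code matrix. A small sketch of that modulation/demodulation bookkeeping (generic Walsh codes, not the NIST SQUID electronics):

      # Walsh-code modulation and demodulation sketch for an N-channel CDM readout.
      import numpy as np

      def walsh_matrix(n):                       # n must be a power of two
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])    # Sylvester construction of Hadamard/Walsh codes
          return H

      N = 32
      W = walsh_matrix(N)                        # +/-1 weights, one code pattern per row slot
      signals = np.random.default_rng(0).normal(size=N)   # stand-in detector signals

      measured = W @ signals                     # the summed readout, one value per row slot
      recovered = (W.T @ measured) / N           # demodulation: W is orthogonal, W.T @ W = N*I

      assert np.allclose(recovered, signals)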

  2. MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie

    2015-05-22

    MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and results by several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei of energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, like cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predicting capabilities of MCNP6 for such reactions are possible, and are discussed in this work.

  3. Validation of CESAR Thermal-hydraulic Module of ASTEC V1.2 Code on BETHSY Experiments

    NASA Astrophysics Data System (ADS)

    Tregoures, Nicolas; Bandini, Giacomino; Foucher, Laurent; Fleurot, Joëlle; Meloni, Paride

    The ASTEC V1 system code is being jointly developed by the French Institut de Radioprotection et Sûreté Nucléaire (IRSN) and the German Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) to address severe accident sequences in a nuclear power plant. Thermal-hydraulics in the primary and secondary systems is addressed by the CESAR module. The aim of this paper is to present the validation of the CESAR module, from the ASTEC V1.2 version, on the basis of well instrumented and qualified integral experiments carried out in the BETHSY facility (CEA, France), which simulates a French 900 MWe PWR reactor. Three tests have been thoroughly investigated with CESAR: the loss-of-coolant 9.1b test (OECD ISP N° 27), the loss-of-feedwater 5.2e test, and the multiple steam generator tube rupture 4.3b test. In the present paper, the results of the code for the three analyzed tests are presented in comparison with the experimental data. The thermal-hydraulic behavior of the BETHSY facility during the transient phase is well reproduced by CESAR: the occurrence of major events and the time evolution of the main thermal-hydraulic parameters of both primary and secondary circuits are well predicted.

  4. A verification of the gyrokinetic microstability codes GEM, GYRO, and GS2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bravenec, R. V.; Chen, Y.; Wan, W.

    2013-10-15

    A previous publication [R. V. Bravenec et al., Phys. Plasmas 18, 122505 (2011)] presented favorable comparisons of linear frequencies and nonlinear fluxes from the Eulerian gyrokinetic codes gyro [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and gs2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)]. The motivation was to verify the codes, i.e., demonstrate that they correctly solve the gyrokinetic-Maxwell equations. The premise was that it is highly unlikely for both codes to yield the same incorrect results. In this work, we add the Lagrangian particle-in-cell code gem [Y. Chen and S. Parker, J. Comput. Phys. 220, 839 (2007)] to the comparisons, not simply to add another code, but also to demonstrate that the codes' algorithms do not matter. We find good agreement of gem with gyro and gs2 for the plasma conditions considered earlier, thus establishing confidence that the codes are verified and that ongoing validation efforts for these plasma parameters are warranted.

  5. Embedding QR codes in tumor board presentations, enhancing educational content for oncology information management.

    PubMed

    Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark

    2011-01-01

    Quick Response (QR) Codes are standard in supply management and seen with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which link the device with the information represented by the QR code (uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses the benefits, the different QR codes from Web links and how QR codes facilitate the distribution of educational content.
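
    Generating such a code for a slide takes only a few lines with the open-source qrcode package for Python (a generic example, not the authors' workflow; the URL is a placeholder):

      # Generate a QR code image that links a slide to supplementary material.
      # Requires the open-source packages qrcode and Pillow (pip install "qrcode[pil]").
      import qrcode

      url = "https://example.org/tumor-board/case-42"   # placeholder link
      img = qrcode.make(url)                            # returns a PIL image
      img.save("case-42-qr.png")                        # drop the PNG onto the slide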

  6. Proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W. (Editor); Jensen, Thomas (Editor)

    2009-01-01

    This NASA conference publication contains the proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification, held as part of LICS in Los Angeles, CA, USA, on August 15, 2009. Software certification demonstrates the reliability, safety, or security of software systems in such a way that it can be checked by an independent authority with minimal trust in the techniques and tools used in the certification process itself. It can build on existing validation and verification (V&V) techniques but introduces the notion of explicit software certificates, which contain all the information necessary for an independent assessment of the demonstrated properties. One such example is proof-carrying code (PCC), which is an important and distinctive approach to enhancing trust in programs. It provides a practical framework for independent assurance of program behavior, especially where source code is not available, or the code author and user are unknown to each other. The workshop will address theoretical foundations of logic-based software certification as well as practical examples and work on alternative application domains. Here "certificate" is construed broadly, to include not just mathematical derivations and proofs but also safety and assurance cases, or any formal evidence that supports the semantic analysis of programs: that is, evidence about an intrinsic property of code and its behaviour that can be independently checked by any user, intermediary, or third party. These guarantees mean that software certificates raise trust in the code itself, distinct from and complementary to any existing trust in the creator of the code, the process used to produce it, or its distributor. In addition to the contributed talks, the workshop featured two invited talks, by Kelly Hayhurst and Andrew Appel. The PCC 2009 website can be found at http://ti.arc.nasa.gov/event/pcc 091.

  7. Beam tracking simulation in the central region of a 13 MeV PET cyclotron

    NASA Astrophysics Data System (ADS)

    Anggraita, Pramudita; Santosa, Budi; Taufik, Mulyani, Emy; Diah, Frida Iswinning

    2012-06-01

    This paper reports the trajectory simulation of the proton beam in the central region of a 13 MeV PET cyclotron, operating with a negative proton beam (for easier beam extraction using a stripper foil), a 40 kV peak accelerating dee voltage at the fourth-harmonic frequency of 77.88 MHz, and an average magnetic field of 1.275 T. The central region covers a field of 240 mm × 240 mm × 30 mm at 1 mm resolution. The calculation was also done at a finer 0.25 mm resolution covering a field of 30 mm × 30 mm × 4 mm to examine the effects of the 0.55 mm horizontal width of the ion source window and the halted trajectories of the positive proton beam. The simulations show up to 7 turns of orbital trajectories, reaching about 1 MeV of beam energy. The distributions of the accelerating electric field and the magnetic field inside the cyclotron were calculated in 3 dimensions using the Opera3D code and the Tosca modules for static magnetic and electric fields. The trajectory simulation was carried out using a Scilab 5.3.3 code.

  8. Monte Carlo calculation for the development of a BNCT neutron source (1eV-10KeV) using MCNP code.

    PubMed

    El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H

    2008-09-01

    Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, which is extensively used to irradiate patients with brain tumors such as GBM. For this purpose, we have studied three different neutron moderators (H(2)O, D(2)O and BeO) and their combinations, four reflectors (Al(2)O(3), C, Bi, and Pb) and two filters (Cd and Bi). The calculations showed that the best assembly configuration corresponds to the combination of the three moderators H(2)O, BeO and D(2)O together with an Al(2)O(3) reflector and the two filters Cd+Bi; it optimizes the epithermal neutron fraction at 72% and minimizes the thermal neutron fraction to 4%, and can thus be used to treat deep brain tumors. The calculations have been performed by means of the Monte Carlo N-Particle code (MCNP 5C). Our results strongly encourage further study of irradiation of the head with epithermal neutron fields.
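
    The figures of merit quoted above (72% epithermal, 4% thermal) are simply the fractions of the tallied spectrum falling in the corresponding energy bands. A bookkeeping sketch with an invented spectrum (not the MCNP tally of this study):

      # Fractions of a neutron spectrum in thermal / epithermal / fast bands (toy spectrum).
      import numpy as np

      E = np.logspace(-9, 1, 400)                     # neutron energy, MeV
      phi = E ** 0.3 * np.exp(-E / 0.05) + 1e-3 / E   # invented spectrum shape

      def band_fraction(E, phi, lo, hi):
          m = (E >= lo) & (E < hi)
          return np.trapz(phi[m], E[m]) / np.trapz(phi, E)

      print("thermal    (< 1 eV):        ", band_fraction(E, phi, 0.0, 1e-6))
      print("epithermal (1 eV - 10 keV): ", band_fraction(E, phi, 1e-6, 1e-2))
      print("fast       (> 10 keV):      ", band_fraction(E, phi, 1e-2, 10.0))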

  9. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automatized tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow one to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.

  10. Gene end-like sequences within the 3' non-coding region of the Nipah virus genome attenuate viral gene transcription.

    PubMed

    Sugai, Akihiro; Sato, Hiroki; Yoneda, Misako; Kai, Chieko

    2017-08-01

    The regulation of transcription during Nipah virus (NiV) replication is poorly understood. Using a bicistronic minigenome system, we investigated the involvement of non-coding regions (NCRs) in the transcriptional re-initiation efficiency of NiV RNA polymerase. Reporter assays revealed that attenuation of NiV gene expression was not constant at each gene junction, and that the attenuating property was controlled by the 3' NCR. However, this regulation was independent of the gene-end, gene-start and intergenic regions. Northern blot analysis indicated that regulation of viral gene expression by the phosphoprotein (P) and large protein (L) 3' NCRs occurred at the transcription level. We identified uridine-rich tracts within the L 3' NCR that are similar to gene-end signals. These gene-end-like sequences were recognized as weak transcription termination signals by the viral RNA polymerase, thereby reducing downstream gene transcription. Thus, we suggest that NiV has a unique mechanism of transcriptional regulation. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Differential diagnosis of "Religious or Spiritual Problem" - possibilities and limitations implied by the V-code 62.89 in DSM-5.

    PubMed

    Prusak, Jacek

    2016-01-01

    Introduction: The preparation of DSM-5 has been a stimulus for research and reflection on the impact of religious/spiritual factors on the phenomenology, differential diagnosis, course, outcome and prognosis of mental disorders. The aim of this paper is to present the attitude of DSM towards religion and spirituality in the clinical context. Even though DSM is not in use in Poland, in contrast to ICD, it gives a different, not only psychopathological, look at religious or spiritual problems. The paper is based on an in-depth analysis of V-code 62.89 ("Religious or spiritual problem") from historical, theoretical and clinical perspectives. The introduction of a non-reductive approach to religious and spiritual problems into DSM can be considered a manifestation of the development of this psychiatric classification with regard to the differential diagnosis between religion and spirituality and psychopathology. By placing religion and spirituality mainly in the category of culture, the authors of DSM-5 have established their solution to the age-old debate concerning the significance of religion/spirituality in clinical practice. Even though DSM-5 offers an expanded understanding of culture and its impact on diagnosis, the V-code 62.89 needs to be improved, taking into account some limitations of the DSM classification. The development of DSM, from its fourth edition, brought a change in the approach towards religion and spirituality in the context of clinical diagnosis. Introducing V-code 62.89 has increased the possibility of differential diagnosis between religion/spirituality and health/psychopathology. The emphasis on the manifestation of cultural diversity has enabled a non-reductive and non-pathologising insight into problems of religion and spirituality. On the other hand, the medicalisation and psychiatrisation of various existential problems, which can be seen in subsequent editions of the DSM, encourages a pathologising approach towards religious or spiritual problems. A clinical look at religion and spirituality should therefore go beyond the limitations of DSM.

  12. Measurement of track structure parameters of low and medium energy helium and carbon ions in nanometric volumes

    NASA Astrophysics Data System (ADS)

    Hilgers, G.; Bug, M. U.; Rabus, H.

    2017-10-01

    Ionization cluster size distributions produced in the sensitive volume of an ion-counting wall-less nanodosimeter by monoenergetic carbon ions with energies between 45 MeV and 150 MeV were measured at the TANDEM-ALPI ion accelerator facility complex of the LNL-INFN in Legnaro. Those produced by monoenergetic helium ions with energies between 2 MeV and 20 MeV were measured at the accelerator facilities of PTB and with a 241Am alpha particle source. C3H8 was used as the target gas. The ionization cluster size distributions were measured in narrow beam geometry with the primary beam passing the target volume at specified distances from its centre, and in broad beam geometry with a fan-like primary beam. By applying a suitable drift time window, the effective size of the target volume was adjusted to match the size of a DNA segment. The measured data were compared with the results of simulations obtained with the PTB Monte Carlo code PTra. Before the comparison, the simulated cluster size distributions were corrected with respect to the background of additional ionizations produced in the transport system of the ionized target gas molecules. Measured and simulated characteristics of the particle track structure are in good agreement for both types of primary particles and for both types of the irradiation geometry. As the range in tissue of the ions investigated is within the typical extension of a spread-out Bragg peak, these data are useful for benchmarking not only ‘general purpose’ track structure simulation codes, but also treatment planning codes used in hadron therapy. Additionally, these data sets may serve as a database for codes modelling the induction of radiation damage at the DNA level, as they almost completely characterize the ionization component of the nanometric track structure.
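
    The track-structure characteristics compared here are derived from the measured ionization cluster size distribution P(ν); typical summary quantities are the mean cluster size M1 = Σ ν P(ν) and the cumulative probabilities F_k = Σ_{ν≥k} P(ν). A bookkeeping sketch with an invented distribution (not the measured LNL/PTB data):

      # Mean ionization cluster size and cumulative probability from a toy P(nu).
      import numpy as np
      from math import exp, factorial

      nu = np.arange(0, 21)                                            # cluster size (number of ionizations)
      P = np.array([exp(-3.0) * 3.0 ** k / factorial(k) for k in nu])  # Poisson-like toy distribution
      P /= P.sum()

      M1 = float(np.sum(nu * P))                                       # mean cluster size
      F2 = float(P[nu >= 2].sum())                                     # probability of at least two ionizations
      print(f"M1 = {M1:.2f}, F2 = {F2:.2f}")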

  13. Assessment of polarization effect on aerosol retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2010-12-01

    Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on the look-up tables (LUT). For this work, MAIAC was run using two different LUTs, the first one generated using the scalar code SHARM [2], and the second one generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam where z-axis lies along the solar beam direction. In this case, the MSH solution for anisotropic part is nearly symmetric in azimuth, and is computed analytically. In scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for an analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205 - 247 (2010). 3. Budak, V.P., Korkin S.V. On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P, Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147 - 204 (2010).

  14. Safety Criticality Standards Using the French CRISTAL Code Package: Application to the AREVA NP UO{sub 2} Fuel Fabrication Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doucet, M.; Durant Terrasson, L.; Mouton, J.

    2006-07-01

    Criticality safety evaluations implement requirements to demonstrate sufficient subcritical margins outside of the reactor environment, for example in fuel fabrication plants. Basic criticality data (i.e., criticality standards) are used in the determination of subcritical margins for all processes involving plutonium or enriched uranium. There are several international criticality standards, e.g., ARH-600, which is one the US nuclear industry relies on. The French Nuclear Safety Authority (DGSNR and its advising body IRSN) has requested AREVA NP to review the criticality standards used for the evaluation of its Low Enriched Uranium fuel fabrication plants with CRISTAL V0, the recently updated French criticality evaluation package. Criticality safety is a concern for every phase of the fabrication process including UF{sub 6} cylinder storage, UF{sub 6}-UO{sub 2} conversion, powder storage, pelletizing, rod loading, assembly fabrication, and assembly transportation. Until 2003, the accepted criticality standards were based on the French CEA work performed in the late seventies with the APOLLO1 cell/assembly computer code. APOLLO1 is a spectral code, used for evaluating the basic characteristics of fuel assemblies for reactor physics applications, which has been enhanced to perform criticality safety calculations. Throughout the years, CRISTAL, starting with APOLLO1 and MORET 3 (a 3D Monte Carlo code), has been improved to account for the growth of its qualification database and for increasing user requirements. Today, CRISTAL V0 is an up-to-date computational tool incorporating a modern basic microscopic cross section set based on JEF2.2 and the comprehensive APOLLO2 and MORET 4 codes. APOLLO2 is well suited for criticality standards calculations as it includes a sophisticated self-shielding approach, a P{sub ij} flux determination, and a 1D transport (S{sub n}) process. CRISTAL V0 is the result of more than five years of development work focusing on theoretical approaches and the implementation of user-friendly graphical interfaces. Due to its comprehensive physical simulation and thanks to its broad qualification database with more than a thousand benchmark/calculation comparisons, CRISTAL V0 provides outstanding and reliable accuracy for criticality evaluations for configurations covering the entire fuel cycle (i.e., from enrichment, pellet/assembly fabrication, transportation, to fuel reprocessing). After a brief description of the calculation scheme and the physics algorithms used in this code package, results for the various fissile media encountered in a UO{sub 2} fuel fabrication plant will be detailed and discussed. (authors)

  15. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Demonstration of proof-of-principle achievement of this goal has been carried out in research under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the USDOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. The validation studies of the code against the experiments will improve understanding of physics important for magnetic fusion, and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.

  16. Estimation of Airborne Radioactivity Induced by 8-GeV-Class Electron LINAC Accelerator.

    PubMed

    Asano, Yoshihiro

    2017-10-01

    Airborne radioactivity induced by high-energy electrons from 6 to 10 GeV is estimated by using analytical methods and the Monte Carlo codes PHITS and FLUKA. Measurements using a gas monitor with a NaI(Tl) scintillator are carried out in air from a dump room at SACLA, an x-ray free-electron laser facility with 7.8-GeV electrons and are compared to the simulations.

  17. College and University Speech Codes in the Aftermath of R.A.V v. City of St. Paul.

    ERIC Educational Resources Information Center

    Fraleigh, Douglas

    In the case of RAV v. City of St. Paul, a teenager was charged with violating the city's Bias-Motivated Crime Ordinance after being accused of burning a cross inside the fenced yard of a black family. In a 9-0 decision, the Supreme Court struck down the St. Paul ordinance, a decision which raised a question as to whether many college and…

  18. COMPTEL neutron response at 17 MeV

    NASA Technical Reports Server (NTRS)

    Oneill, Terrence J.; Ait-Ouamer, Farid; Morris, Joann; Tumer, O. Tumay; White, R. Stephen; Zych, Allen D.

    1992-01-01

    The Compton imaging telescope (COMPTEL) instrument of the Gamma Ray Observatory was exposed to 17 MeV d,t neutrons prior to launch. These data were analyzed and compared with Monte Carlo calculations using the MCNP(LANL) code. Energy and angular resolutions are compared and absolute efficiencies are calculated at 0 and 30 degrees incident angle. The COMPTEL neutron responses at 17 MeV and higher energies are needed to understand solar flare neutron data.

  19. Description and Evaluation of GDEM-V 3.0

    DTIC Science & Technology

    2009-02-06

    Description and Evaluation of GDEM-V 3.0. Michael R. Carnes, Ocean Sciences Branch, Oceanography Division, February 6, 2009. The GDEM (Generalized Digital Environment Model) has served as …

  20. The Shock and Vibration Digest, Volume 17, Number 10

    DTIC Science & Technology

    1985-10-01

    Venkayya, V.B. and Tischler, V.A.; Calico, R.A., Jr. and Tnyfault, D.V., "Frequency Control and the Effect on the Decoupled Large Space Structure …"; … Hurwitz presented. The threshold concept is described, as are receiver operating characteristic … Numerical Structural Mechanics Branch (Code 1844). "Vibration and Dynamics of Off Road Vehicles", M. Apetaur, Prague Univ. of Tech.; "… Part 2 - Realistic Complex Elements", I.A. Craighead, P.R. Brown.

  1. Spacecraft Charging Standard Report.

    DTIC Science & Technology

    1980-09-30

    SSPM include: SAMPLE POTENTIAL (with respect to S/C ground): Aluminized Kapton -2.0 kV; Silvered Teflon -4.0 kV; Astroquartz -3.7 kV. Analysis: … and potential gradients on the space vehicle (candidate spacecraft locations for ESD tests). (The NASCAP computer code, when validated, will be useful …) The coupling analysis should then determine as a minimum: 1. electromagnetic fields generated interior to the space vehicle due to ESD; 2. induced …

  2. Software Support Measurement and Estimating for Oracle Database Applications Using Mark II Function Points

    DTIC Science & Technology

    1992-12-01

    V.3.3. Coefficient of Determination … 37; V.3.4. F-Ratio … 37; V.3.5. … Instructions are defined as lines of code or card images. Thus, a line containing two or more source statements counts as one instruction; a … To understand the productivity paradox, recall the concept of virtual machines. When a higher level machine groups together many instructions of a lower level …

  3. Four year-olds use norm-based coding for face identity.

    PubMed

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-05-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4 year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4 year-old children's poor face identification skills. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Receiver DCB estimation and GPS vTEC study at a low latitude station in the South Pacific

    NASA Astrophysics Data System (ADS)

    Prasad, Ramendra; Kumar, Sushil; Jayachandran, P. T.

    2016-11-01

    The statistical estimation of the receiver differential code bias (DCB) of the GSV4004B receiver at a low latitude station, Suva (lat. 18.15°S, long. 178.45°E, Geomag. Lat. 21.07°S), Fiji, and the subsequent behaviour of vTEC, are presented. By means of the least squares linear regression fitting technique, the receiver DCB was determined using the GPS vTEC data recorded during the year 2010, CODE TEC and the IRI-2012 model for 2010. To substantiate the results, the minimization of the standard deviation (SD) method was also used for the GPS vTEC data. The overall monthly DCB was estimated to be in the range of 62.6 TECU. The vTEC after removing the resultant monthly DCB was consistent with other low latitude observations. The GPS vTEC 2010 data after eliminating the resultant DCB were lower in comparison to Faraday rotation vTEC measurements at Suva during 1984, primarily due to higher solar activity during 1984 as compared to 2010. Seasonally, vTEC was maximum during summer and minimum during winter. The winter showed the least vTEC variability whereas the equinox showed the largest daytime variability. The effect of geomagnetic disturbances showed that both vTEC and its variability were higher on magnetically disturbed days as compared to quiet days, with maximum variability in the daytime. Two geomagnetic storms of moderate strength with main phases in the local daytime showed a long duration (∼52 h) increase in vTEC by 33-67%, which can be accounted for by changes in E×B drifts due to prompt penetration of the storm-time auroral electric field in the daytime and the disturbance dynamo electric field in the nighttime to low latitudes.
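    The minimization-of-standard-deviation step mentioned above can be illustrated with a short sketch: if the receiver bias is removed correctly, the vertical TEC inferred from different satellites at the same epoch should agree, so the candidate DCB that minimizes the across-satellite spread is taken as the estimate. The snippet below uses synthetic slant TEC, a simplified single-layer mapping function, and hypothetical variable names; it is not the authors' GSV4004B processing chain.

      # Synthetic illustration of receiver-DCB estimation by minimizing the
      # across-satellite spread of vTEC (all values and names are invented).
      import numpy as np

      rng = np.random.default_rng(0)
      n_epochs, n_sats = 120, 6
      true_vtec = 10 + 2 * np.sin(np.linspace(0, np.pi, n_epochs))[:, None]   # TECU
      zenith_ipp = np.deg2rad(rng.uniform(10, 60, size=(n_epochs, n_sats)))   # at the pierce point
      true_rx_dcb = 4.2            # TECU; satellite DCBs assumed already removed

      # Observed slant TEC: vertical TEC mapped to the slant path, plus the bias and noise
      stec = true_vtec / np.cos(zenith_ipp) + true_rx_dcb + 0.3 * rng.standard_normal((n_epochs, n_sats))

      def spread_for(dcb):
          """Summed across-satellite spread of vTEC if `dcb` were the receiver bias."""
          vtec = (stec - dcb) * np.cos(zenith_ipp)   # remove bias, map slant to vertical
          return vtec.std(axis=1).sum()

      candidates = np.arange(-10.0, 10.0, 0.1)
      best = candidates[np.argmin([spread_for(c) for c in candidates])]
      print(f"estimated receiver DCB ~ {best:.1f} TECU (true value {true_rx_dcb})")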

  5. Tidal and Lunar Data for Point Mugu, San Nicolas Island, and the Barking Sands Area During 1988.

    DTIC Science & Technology

    1987-12-31

    … Commander Third Fleet, Pearl Harbor, HI 96860-7500. Commander, Naval Weapons Center, Attn: Earth and Planetary Sciences Division, Code 343 (Technical …). … Center, P.O. Box 271, Attn: Fishery-Oceanographic Group, La Jolla, CA 92037. University of California, Department of Biological Sciences, Attn: Dr. A. M…

  6. AeroDyn V15.04: Design tool for wind and MHK turbines

    DOE Data Explorer

    Murray, Robynne; Hayman, Greg; Jonkman, Jason

    2017-04-28

    AeroDyn is a time-domain wind and MHK turbine aerodynamics module that can be coupled into the FAST version 8 multi-physics engineering tool to enable aero-elastic simulation of horizontal-axis wind turbines. AeroDyn V15.04 has been updated to include a cavitation check for MHK turbines, and can be driven as a standalone code to compute wind turbine aerodynamic response uncoupled from FAST. Note that while AeroDyn has been updated to v15.04, FAST v8.16 has not yet been updated and still uses AeroDyn v15.03.

  7. Speech Recognition: Proceedings of a Workshop Held in San Diego, California on March 24-26, 1987

    DTIC Science & Technology

    1987-03-01

    The following count as single "words": HONG-KONG, SAN-DIEGO, ICE-NINE, PAC-ALERT, LAT-LON, FUGET-1, M-RATING, C-CODE, SQO-23, etc. However, BQ'ING … Results for the baseline isolated-word HMM system are depicted in Fig. 1, while Fig. 2 indicates the robustness enhancements which have been developed and tested … the United States Government.

  8. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summary: Program title: DynWFC (Dynamic WaveFront Coding). Catalogue identifier: AEKC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 483. No. of bytes in distributed program, including test data, etc.: 2 437 713. Distribution format: tar.gz. Programming language: Labview 8.5 and NI Vision and MinGW C Compiler. Computer: Tested on PC Intel ® Pentium ®. Operating system: Tested on Windows XP. Classification: 18. Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.
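    The acquire-evaluate-adjust loop described in this record can be sketched schematically as follows. The published program is written in LabVIEW with NI Vision; the Python stubs below (capture_frame, required_coding_strength, program_slm_phase_mask) are hypothetical placeholders meant only to show the control flow of a scene-adaptive wavefront-coding system, not the distributed DynWFC code.

      # Schematic control flow only: hardware calls are hypothetical stubs and the
      # scene metric is a placeholder, not the published DynWFC (LabVIEW) code.
      import numpy as np

      def capture_frame(rng):
          # Stub for camera acquisition: returns a synthetic image.
          return rng.random((64, 64))

      def required_coding_strength(image):
          # Placeholder scene analysis: stronger local contrast variation is taken
          # to mean a larger depth-of-field requirement, hence stronger coding.
          grad = np.hypot(*np.gradient(image))
          return float(np.clip(grad.std() * 10.0, 0.0, 1.0))

      def program_slm_phase_mask(strength):
          # Stub for writing a phase mask scaled by `strength` onto the SLM.
          print(f"SLM coding strength set to {strength:.2f}")

      rng = np.random.default_rng(1)
      strength = 0.0
      for _ in range(3):                         # the real system loops continuously
          image = capture_frame(rng)
          target = required_coding_strength(image)
          strength += 0.5 * (target - strength)  # damped update to avoid oscillation
          program_slm_phase_mask(strength)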

  9. Assessment of WWMCCS Performance in a Post-Nuclear Attack Environment. Sanitized

    DTIC Science & Technology

    1975-11-01

    CONTRACT No. 001-75-C-0217. This work sponsored by the Defense Nuclear Agency under RDT&E RMSS Code X36075669 Q94OAXDDO01O1 H2590D. … 21 Jan 75-31 Oct 75 …

  10. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  11. Experimental verification of bremsstrahlung production and dosimetry predictions for 15.5 MeV electrons

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Beutler, D. E.; Halbleib, J. A.; Knott, D. P.

    1991-12-01

    The radiation produced by a 15.5-MeV monoenergetic electron beam incident on optimized and nonoptimized bremsstrahlung targets is characterized using the ITS Monte Carlo code and measurements with equilibrated and nonequilibrated TLD dosimetry. Comparisons between calculations and measurements verify the calculations and demonstrate that the code can be used to predict both bremsstrahlung production and TLD response for radiation fields that are characteristic of those produced by pulsed simulators of gamma rays. The comparisons provide independent confirmation of the validity of the TLD calibration for photon fields characteristic of gamma-ray simulators. The empirical Martin equation, which is often used to calculate radiation dose from optimized bremsstrahlung targets, is examined, and its range of validity is established.

  12. Computed secondary-particle energy spectra following nonelastic neutron interactions with sup 12 C for E sub n between 15 and 60 MeV: Comparisons of results from two calculational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, d{sigma}/dE, following nonelastic neutron interactions with {sup 12}C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed. 16 refs., 44 figs., 2 tabs.

  13. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 620503. No. of bytes in distributed program, including test data, etc.: 19837671. Distribution format: tar.gz. Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.

  14. A Case Study of IV&V Cost Effectiveness

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.; McCaugherty, Dan; Joshi, Tulasi; Callahan, John

    1997-01-01

    This paper looks at the Independent Verification and Validation (IV&V) of NASA's Space Shuttle Day of Launch I-Load Update (DoLILU) project. IV&V is defined. The system's development life cycle is explained. Data collection and analysis are described. DoLILU Issue Tracking Reports (DITRs) authored by IV&V personnel are analyzed to determine the effectiveness of IV&V in finding errors before the code, testing, and integration phase of the software development life cycle. The study's findings are reported along with the limitations of the study and planned future research.

  15. Knowledge based system verification and validation as related to automation of space station subsystems: Rationale for a knowledge based system lifecycle

    NASA Technical Reports Server (NTRS)

    Richardson, Keith; Wong, Carla

    1988-01-01

    The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge Based System (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale of a simple form of the KBS lifecycle is presented, including accommodation to certain critical KBS differences from software development.

  16. Effects of isoflurane anesthesia on ensemble patterns of Ca2+ activity in mouse v1: reduced direction selectivity independent of increased correlations in cellular activity.

    PubMed

    Goltstein, Pieter M; Montijn, Jorrit S; Pennartz, Cyriel M A

    2015-01-01

    Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to 'break' the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity.

  17. Inner Radiation Belt Representation of the Energetic Electron Environment: Model and Data Synthesis Using the Salammbo Radiation Belt Transport Code and Los Alamos Geosynchronous and GPS Energetic Particle Data

    NASA Technical Reports Server (NTRS)

    Friedel, R. H. W.; Bourdarie, S.; Fennell, J.; Kanekal, S.; Cayton, T. E.

    2004-01-01

    The highly energetic electron environment in the inner magnetosphere (GEO inward) has received a lot of research attention in recent years, as the dynamics of relativistic electron acceleration and transport are not yet fully understood. These electrons can cause deep dielectric charging in any space hardware in the MEO to GEO region. We use a new and novel approach to obtain a global representation of the inner magnetospheric energetic electron environment, which can reproduce the absolute environment (flux) for any spacecraft orbit in that region to within a factor of 2 for the energy range of 100 keV to 5 MeV electrons, for any levels of magnetospheric activity. We combine the extensive set of inner magnetospheric energetic electron observations available at Los Alamos with the physics based Salammbo transport code, using the data assimilation technique of "nudging". This in effect inputs in-situ data into the code and allows the diffusion mechanisms in the code to interpolate the data into regions and times of no data availability. We present here details of the methods used, both in the data assimilation process and in the necessary inter-calibration of the input data used. We will present sample runs of the model/data code and compare the results to test spacecraft data not used in the data assimilation process.
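    The "nudging" technique referred to above can be reduced to a one-line idea: wherever and whenever observations exist, the model state is relaxed toward the observed value on a chosen timescale, while elsewhere the physics evolves freely. The toy sketch below shows that bookkeeping for a scalar state; it is not the Salammbo implementation, and the model term, observation values and timescale are invented for illustration.

      # Toy scalar model with sparse observations; nudging relaxes the state toward
      # the data with timescale tau. Values are invented, not Salammbo physics.
      import numpy as np

      dt, tau = 0.1, 0.5
      obs = {2.0: 5.0, 6.0: 1.0}          # time -> "observed" value

      x, history = 3.0, []
      for t in np.arange(0.0, 10.0, dt):
          x += dt * (-0.2 * x + 0.1)      # stand-in for the model physics (diffusion/losses)
          for t_obs, y in obs.items():
              if abs(t - t_obs) < dt / 2: # an observation is available at this step
                  x += dt * (y - x) / tau # nudge the state toward the data
          history.append(x)
      print(f"final state: {history[-1]:.2f}")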

  18. Effects of Isoflurane Anesthesia on Ensemble Patterns of Ca2+ Activity in Mouse V1: Reduced Direction Selectivity Independent of Increased Correlations in Cellular Activity

    PubMed Central

    Goltstein, Pieter M.; Montijn, Jorrit S.; Pennartz, Cyriel M. A.

    2015-01-01

    Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to ‘break’ the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity. PMID:25706867

  19. Cognitive CDMA Channelization

    DTIC Science & Technology

    2010-03-01

    The proposed scheme for power and code allocation for the secondary user is outlined in Fig. 2. V. SIMULATION STUDIES: We consider a primary DS-CDMA system … Dates covered: January 2008 – June 2009. Contract number: In-House.

  20. A multiblock/multizone code (PAB 3D-v2) for the three-dimensional Navier-Stokes equations: Preliminary applications

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    1990-01-01

    The development and applications of multiblock/multizone and adaptive grid methodologies for solving the three-dimensional simplified Navier-Stokes equations are described. Adaptive grid and multiblock/multizone approaches are introduced and applied to external and internal flow problems. These new implementations increase the capabilities and flexibility of the PAB3D code in solving flow problems associated with complex geometry.

  1. Relativistic Klystron Amplifiers Driven by Modulated Intense Relativistic Electron Beams

    DTIC Science & Technology

    1990-04-11

    The electrical parameters of the cavity were calculated using the SUPERFISH computer code. We found: (1) that the gap voltage V was half as high as the … Using the SUPERFISH computer code and experimenting with various cavities, we found the best cavity geometry that fulfilled the above conditions. For this cavity … paths. Experiments along this line are being planned (T. Godlove and F. Mako, private communication). A somewhat different concept which also …

  2. Steady State Film Boiling Heat Transfer Simulated With Trace V4.160

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audrius Jasiulevicius; Rafael Macian-Juan

    2006-07-01

    This paper presents the results of the assessment and analysis of TRACE v4.160 heat transfer predictions in the post-CHF (critical heat flux) region and discusses the possibilities to improve the TRACE v4.160 code predictions of film boiling heat transfer when applying different film boiling correlations. For this purpose, the TRACE v4.160-calculated film boiling heat flux and the resulting maximum inner wall temperatures during film boiling in single tubes were compared with experimental data obtained at the Royal Institute of Technology (KTH) in Stockholm, Sweden. The experimental database included measurements for pressures ranging from 30 to 200 bar and coolant mass fluxes from 500 to 3000 kg/m{sup 2}s. It was found that TRACE v4.160 does not produce correct predictions of the film boiling heat flux, and consequently of the maximum inner wall temperature in the test section, under the wide range of conditions documented in the KTH experiments. In particular, it was found that the standard TRACE v4.160 under-predicts the film boiling heat transfer coefficient at low pressure-low mass flux and high pressure-high mass flux conditions. For most of the rest of the investigated range of parameters, TRACE v4.160 over-predicts the film boiling heat transfer coefficient, which can lead to non-conservative predictions in applications to nuclear power plant analyses. Since no satisfactory agreement with the experimental database was obtained with the standard TRACE v4.160 film boiling heat transfer correlations, we have added seven film boiling correlations to TRACE v4.160 in order to investigate the possibility to improve the code predictions for conditions similar to the KTH tests. The film boiling correlations were selected among the most commonly used film boiling correlations found in the open literature, namely the Groeneveld 5.7, Bishop (2 correlations), Tong, Konkov, Miropolskii and Groeneveld-Delorme correlations. The only correlation among those investigated which resulted in a significant improvement of TRACE predictions was the Groeneveld 5.7. It was found that replacing the current film boiling correlation (Dougall-Rohsenow) for the wall-to-gas heat transfer with Groeneveld 5.7 improves the code predictions for the film boiling heat transfer at high qualities in single tubes in the entire range of pressure and coolant mass flux considered. (authors)

  3. 77 FR 69572 - Special Conditions: Embraer S.A., Model EMB-550 Airplanes; Flight Envelope Protection: High Speed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-20

    ... Envelope Protection: High Speed Limiting. AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice... inadvertently or intentionally exceeding a speed approximately equivalent to VFC or attaining VDF. Current Title 14 Code of Federal Regulations (14 CFR) part 25 regulations do not relate to a high speed limiter that might...

  4. Corporate Social Responsibility: A Cross Sectional Examination of Incentivization.

    DTIC Science & Technology

    1995-09-01

    … which address organizational behavior: Corporate Social Responsibility (CSR), Expense Preference Approach (EPA), Resource Dependency Theory (RDT) … Corporate Social Responsibility: A Cross Sectional Examination of Incentivization. Thesis, Jennifer A. Block, B.S., First Lieutenant, USAF.

  5. The evolution of the genetic code: Impasses and challenges.

    PubMed

    Kun, Ádám; Radványi, Ádám

    2018-02-01

    The origin of the genetic code and translation is a "notoriously difficult problem". In this survey we present a list of questions that a full theory of the genetic code needs to answer. We assess the leading hypotheses according to these criteria. The stereochemical, the coding coenzyme handle, the coevolution, the four-column theory, the error minimization and the frozen accident hypotheses are discussed. The integration of these hypotheses can account for the origin of the genetic code. But experiments are badly needed. Thus we suggest a host of experiments that could (in)validate some of the models. We focus especially on the coding coenzyme handle hypothesis (CCH). The CCH suggests that amino acids attached to RNA handles enhanced catalytic activities of ribozymes. Alternatively, amino acids without handles or with a handle consisting of a single adenine, like in contemporary coenzymes could have been employed. All three scenarios can be tested in in vitro compartmentalized systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, have been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using CPA100's original code default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
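    For readers unfamiliar with the scoring behind a dose-point kernel, the usual recipe is to bin the energy deposited around an isotropic point source into thin spherical shells and divide by the shell mass. The sketch below does exactly that for a synthetic event list; a real calculation would take the (radius, energy) pairs from Geant4-DNA or CPA100 output rather than from a random-number generator, and the shell widths here are arbitrary.

      # Radial scoring of a dose-point kernel from a synthetic event list
      # (a real run would use Geant4-DNA or CPA100 energy depositions).
      import numpy as np

      rng = np.random.default_rng(2)
      r_um = rng.exponential(scale=0.5, size=100_000)        # radius of each deposition, um
      edep_eV = rng.uniform(10.0, 40.0, size=100_000)        # energy of each deposition, eV

      edges = np.linspace(0.0, 3.0, 31)                      # 0.1 um spherical shells
      esum, _ = np.histogram(r_um, bins=edges, weights=edep_eV)

      rho = 1.0e-12                                          # liquid water density, g/um^3
      shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
      shell_dose = esum / (shell_vol * rho)                  # eV per gram in each shell
      dpk = shell_dose / shell_dose.sum()                    # normalized radial kernel
      print(dpk[:5])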

  7. A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories

    NASA Astrophysics Data System (ADS)

    Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon

    BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for performing error protection on a memory system. Among the many decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous in that it corrects errors through a simple calculation for a given t value. However, it becomes problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by adopting a multi-mode hardware architecture that covers ν = 0 to 3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption lengthens the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (Error Correction Code/Circuit) in the memory system of a small-footprint SoC (System on Chip).
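    A software illustration of the PGZ step in question is given below for a binary BCH code over GF(2^4) (n = 15, t = 2): the number of errors ν is chosen by testing the determinant of the syndrome matrix, and blindly inverting that matrix when ν ≠ t is precisely the divide-by-zero situation a multi-mode decoder must handle. This is a sketch of the textbook algorithm, not the paper's hardware datapath.

      # GF(2^4) toy (primitive polynomial x^4 + x + 1) for a t = 2, n = 15 BCH code:
      # nu is selected from syndrome-matrix determinants; the singular case is the
      # "division by zero" hazard the multi-mode decoder avoids.
      EXP, LOG = [0] * 30, [0] * 16
      x = 1
      for i in range(15):
          EXP[i] = EXP[i + 15] = x
          LOG[x] = i
          x <<= 1
          if x & 0x10:
              x ^= 0x13

      def gf_mul(a, b):
          return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

      def syndromes(error_positions, count=4):
          # All-zero codeword plus errors, so S_j is the sum of alpha^(j*i) over error positions i.
          out = []
          for j in range(1, count + 1):
              s = 0
              for i in error_positions:
                  s ^= EXP[(j * i) % 15]
              out.append(s)
          return out

      def num_errors(S):
          S1, S2, S3, _ = S
          det2 = gf_mul(S1, S3) ^ gf_mul(S2, S2)   # det of [[S1, S2], [S2, S3]] in characteristic 2
          if det2 != 0:
              return 2                             # full rank: nu = t, safe to invert
          if S1 != 0:
              return 1                             # singular 2x2 system: fall back to nu = 1
          return 0                                 # all syndromes zero: no error detected

      for errs in ([], [4], [4, 11]):
          print(errs, "->", num_errors(syndromes(errs)))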

  8. 5D Tempest simulations of kinetic edge turbulence

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M. R.; Hittinger, J. A.; Kerbel, G. D.; Nevins, W. M.; Rognlien, T. D.; Umansky, M. V.; Qin, H.

    2006-10-01

    Results are presented from the development and application of TEMPEST, a nonlinear five-dimensional (3d2v) gyrokinetic continuum code. The simulation results and theoretical analysis include studies of H-mode edge plasma neoclassical transport and turbulence in real divertor geometry and its relationship to plasma flow generation with zero external momentum input, including the important orbit-squeezing effect due to the large electric field flow-shear in the edge. In order to extend the code to 5D, we have formulated a set of fully nonlinear electrostatic gyrokinetic equations and a fully nonlinear gyrokinetic Poisson's equation which is valid for both neoclassical and turbulence simulations. Our 5D gyrokinetic code is built on the 4D version of the Tempest neoclassical code, with an extension to a fifth dimension in the binormal direction. The code is able to simulate either a full torus or a toroidal segment. Progress on performing 5D turbulence simulations will be reported.

  9. Elan4/SPARC V9 Cross Loader and Dynamic Linker

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    anf Fabien Lebaillif-Delamare, Fabrizio Petrini

    2004-10-25

    The Elan4/SPARC V9 Cross Loader and Linker is part of the Linux system software that allows the dynamic loading and linking of user code on the Quadrics QsNETII network interface, also called Elan4. Elan4 uses a thread processor that is based on the assembly instruction set of the SPARC V9. All this software is integrated as a Linux kernel module in the Linux 2.6.5 release.

  10. Validation of Filtration Skid During Land-Based & Shipboard Tests

    DTIC Science & Technology

    2012-10-12

    … a preliminary design of a filter skid device that it had previously developed. A prototype unit was developed and deployed on the bulk carrier M/V Indiana Harbor, and commissioning …

  11. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, S; Kaye, W; Jaworski, J

    2015-06-15

    Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. The knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for various applications worldwide, including proton therapy imaging R&D.
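    The backprojection reconstruction mentioned in the Methods section can be illustrated with a one-dimensional toy: the detector count distribution is ideally a shifted copy of the mask pattern, so correlating the measured counts with every candidate shift recovers the source direction. The sketch below is purely illustrative (random mask, Poisson noise); the actual Polaris system images in two dimensions with 3D position-sensitive CZT arrays.

      # 1-D toy: detector counts are a shifted mask shadow; backprojection scores
      # each candidate shift by correlation. Mask and counts are synthetic.
      import numpy as np

      rng = np.random.default_rng(3)
      mask = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1])
      n = mask.size

      true_shift = 7                                                  # stands in for the source direction
      counts = rng.poisson(np.roll(mask, true_shift) * 500.0 + 5.0)   # shadow + background

      scores = [np.dot(counts - counts.mean(), np.roll(mask, s) - mask.mean())
                for s in range(n)]
      print("reconstructed shift:", int(np.argmax(scores)), "| true shift:", true_shift)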

  12. Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color.

    PubMed

    Bannert, Michael M; Bartels, Andreas

    2018-04-11

    Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception. Copyright © 2018 the authors 0270-6474/18/383657-12$15.00/0.
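    The key analysis in this record is cross-decoding: a classifier trained on voxel patterns evoked by physically presented colors is tested on patterns recorded during object-color imagery. The sketch below reproduces only that logic with synthetic data; the classifier choice, preprocessing and data layout are stand-ins rather than the authors' pipeline.

      # Synthetic cross-decoding: train on "physical color" patterns, test on
      # noisier "imagery" patterns. Classifier and data are stand-ins.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      n_voxels, n_per_class = 200, 40
      prototypes = rng.standard_normal((3, n_voxels))        # latent red/green/yellow patterns

      def simulate(noise, n):
          X = np.vstack([p + noise * rng.standard_normal((n, n_voxels)) for p in prototypes])
          y = np.repeat([0, 1, 2], n)                        # 0 = red, 1 = green, 2 = yellow
          return X, y

      X_seen, y_seen = simulate(noise=2.0, n=n_per_class)    # viewing physical colors
      X_imag, y_imag = simulate(noise=4.0, n=n_per_class)    # imagining colored objects

      clf = LogisticRegression(max_iter=1000).fit(X_seen, y_seen)
      print("cross-decoding accuracy on imagery trials:", clf.score(X_imag, y_imag))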

  13. Implementing displacement damage calculations for electrons and gamma rays in the Particle and Heavy-Ion Transport code System

    NASA Astrophysics Data System (ADS)

    Iwamoto, Yosuke

    2018-03-01

    In this study, the Monte Carlo displacement damage calculation method in the Particle and Heavy-Ion Transport code System (PHITS) was improved to calculate displacements per atom (DPA) values due to irradiation by electrons (or positrons) and gamma rays. For the damage due to electrons and gamma rays, PHITS simulates electromagnetic cascades using the Electron Gamma Shower version 5 (EGS5) algorithm and calculates DPA values using the recoil energies and the McKinley-Feshbach cross section. A comparison of DPA values calculated by PHITS and the Monte Carlo assisted Classical Method (MCCM) reveals that they were in good agreement for gamma-ray irradiations of silicon and iron at energies that were less than 10 MeV. Above 10 MeV, PHITS can calculate DPA values not only for electrons but also for charged particles produced by photonuclear reactions. In DPA depth distributions under electron and gamma-ray irradiations, build-up effects can be observed near the target's surface. For irradiation of 90-cm-thick carbon by protons with energies of more than 30 GeV, the ratio of the secondary electron DPA values to the total DPA values is more than 10% and increases with an increase in incident energy. In summary, PHITS can calculate DPA values for all particles and materials over a wide energy range between 1 keV and 1 TeV for electrons, gamma rays, and charged particles and between 10^-5 eV and 1 TeV for neutrons.
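    The final step from a recoil's damage energy to a displacement count is commonly taken with the NRT model, and that bookkeeping is all the sketch below shows. PHITS' actual treatment (McKinley-Feshbach cross sections for electron recoils, damage-energy partition, folding with the particle fluence) is considerably more involved, and the 40 eV threshold used here is only an assumed example value for iron.

      # NRT displacement bookkeeping only; the 40 eV threshold is an assumed
      # example for iron, and PHITS' full treatment is far more detailed.
      def nrt_displacements(damage_energy_eV, e_d_eV=40.0):
          """Number of displaced atoms for a given damage energy (NRT model)."""
          if damage_energy_eV < e_d_eV:
              return 0.0
          if damage_energy_eV < 2.5 * e_d_eV:
              return 1.0
          return 0.8 * damage_energy_eV / (2.0 * e_d_eV)

      print(nrt_displacements(1000.0))   # a 1 keV damage-energy recoil -> 10 displacements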

  14. TOUGH+HYDRATE v1.2 User's Manual: A Code for the Simulation of System Behavior in Hydrate-Bearing Geologic Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, George J.; Kowalsky, Michael B.; Pruess, Karsten

    TOUGH+HYDRATE v1.2 is a code for the simulation of the behavior of hydrate-bearing geologic systems, and represents the second update of the code since its first release [Moridis et al., 2008]. By solving the coupled equations of mass and heat balance, TOUGH+HYDRATE can model the non-isothermal gas release, phase behavior and flow of fluids and heat under conditions typical of common natural CH4-hydrate deposits (i.e., in the permafrost and in deep ocean sediments) in complex geological media at any scale (from laboratory to reservoir) at which Darcy’s law is valid. TOUGH+HYDRATE v1.2 includes both an equilibrium and a kinetic model of hydrate formation and dissociation. The model accounts for heat and up to four mass components, i.e., water, CH4, hydrate, and water-soluble inhibitors such as salts or alcohols. These are partitioned among four possible phases (gas phase, liquid phase, ice phase and hydrate phase). Hydrate dissociation or formation, phase changes and the corresponding thermal effects are fully described, as are the effects of inhibitors. The model can describe all possible hydrate dissociation mechanisms, i.e., depressurization, thermal stimulation, salting-out effects and inhibitor-induced effects. TOUGH+HYDRATE is a member of TOUGH+, the successor to the TOUGH2 [Pruess et al., 1991] family of codes for multi-component, multiphase fluid and heat flow developed at the Lawrence Berkeley National Laboratory. It is written in standard FORTRAN 95/2003, and can be run on any computational platform (workstation, PC, Macintosh) for which such compilers are available.

  15. Progress Report on Alloy 617 Notched Specimen Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMurtrey, Michael David; Wright, Richard Neil; Lillo, Thomas Martin

    Creep behavior of Alloy 617 has been extensively characterized to support the development of a draft Code Case to qualify Alloy 617 in Section III, Division 5 of the ASME Boiler and Pressure Vessel Code. This will allow use of Alloy 617 in construction of nuclear reactor components at elevated temperatures and for longer periods of time (up to 950°C and 100,000 hours). Prior to actual use, additional concerns not considered in the ASME code need to be addressed. Code Cases are based largely on uniaxial testing of smooth gage specimens. In service conditions, components will generally be under multi-axial loading. There is also the concern of the behavior at discontinuities, such as threaded components. To address the concerns of multi-axial creep behavior and behavior at geometric discontinuities, notched specimens have been designed to create conditions representative of the states that service components experience. Two general notch geometries have been used for these series of tests: U notch and V notch specimens. The notches produce a triaxial stress state, though not uniform across the specimen. Characterization of the creep behavior of the U notch specimens and the creep rupture behavior of the V notch specimens provides a good approximation of the behavior expected of actual components. Preliminary testing and analysis have been completed and are reported in this document. This includes results from V notch specimens tested at 900°C and 800°C. Failure occurred in the smooth gage section of the specimen rather than at the root of the notch, though some damage was present at the root of the notch, where the initial stress was highest. This indicates notch strengthening behavior in this material at these temperatures.

  16. 24 CFR 905.10 - Capital Fund formula (CFF).

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... projects; (ii) Vacancy reduction; (iii) Addressing deferred maintenance needs and the replacement of obsolete utility systems and dwelling equipment; (iv) Planned code compliance; (v) Management improvements...

  17. 24 CFR 905.10 - Capital Fund formula (CFF).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... projects; (ii) Vacancy reduction; (iii) Addressing deferred maintenance needs and the replacement of obsolete utility systems and dwelling equipment; (iv) Planned code compliance; (v) Management improvements...

  18. Ube2V2 Is a Rosetta Stone Bridging Redox and Ubiquitin Codes, Coordinating DNA Damage Responses.

    PubMed

    Zhao, Yi; Long, Marcus J C; Wang, Yiran; Zhang, Sheng; Aye, Yimon

    2018-02-28

    Posttranslational modifications (PTMs) are the lingua franca of cellular communication. Most PTMs are enzyme-orchestrated. However, the reemergence of electrophilic drugs has ushered in the mining of unconventional, non-enzyme-catalyzed electrophile-signaling pathways. Despite the latest impetus toward harnessing kinetically and functionally privileged cysteines for electrophilic drug design, identifying these sensors remains challenging. Herein, we designed "G-REX", a technique that allows controlled release of reactive electrophiles in vivo. Mitigating the toxicity/off-target effects associated with uncontrolled bolus exposure, G-REX tagged first-responding innate cysteines that bind electrophiles under true kcat/KM conditions. G-REX identified two allosteric ubiquitin-conjugating proteins, Ube2V1/Ube2V2, sharing a novel privileged sensor cysteine. This non-enzyme-catalyzed PTM triggered responses specific to each protein. Thus, G-REX is an unbiased method to identify novel functional cysteines. Contrasting conventional active-site/off-active-site cysteine modifications that regulate target activity, modification of Ube2V2 allosterically hyperactivated its enzymatically active binding partner Ube2N, promoting K63-linked client ubiquitination and stimulating the H2AX-dependent DNA damage response. This work establishes Ube2V2 as a Rosetta stone bridging the redox and ubiquitin codes to guard genome integrity.

  19. Automated generation of lattice QCD Feynman rules

    NASA Astrophysics Data System (ADS)

    Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.

    2009-12-01

    The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate also the derivatives of Feynman diagrams. Program summary: Program title: HiPPY, HPsrc. Catalogue identifier: AEDX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPLv2 (see Additional comments below). No. of lines in distributed program, including test data, etc.: 513 426. No. of bytes in distributed program, including test data, etc.: 4 893 707. Distribution format: tar.gz. Programming language: Python, Fortran95. Computer: HiPPy: Single-processor workstations. HPsrc: Single-processor workstations and MPI-enabled multi-processor systems. Operating system: HiPPy: Any for which Python v2.5.x is available. HPsrc: Any for which a standards-compliant Fortran95 compiler is available. Has the code been vectorised or parallelised?: Yes. RAM: Problem specific, typically less than 1 GB for either code. Classification: 4.4, 11.5. Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions. Solution method: An automated expansion method implemented in Python (HiPPy) and code to use expansions to generate Feynman rules in Fortran95 (HPsrc). Restrictions: No general restrictions. Specific restrictions are discussed in the text. Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, would be communicated to us. Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run. References: [1] A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026. [2] M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.

  20. Development and Implementation of Dynamic Scripts to Execute Cycled GSI/WRF Forecasts

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Xuanli; Watson, Leela

    2014-01-01

    The Weather Research and Forecasting (WRF) numerical weather prediction (NWP) model and Gridpoint Statistical Interpolation (GSI) data assimilation (DA) are the operational systems that make up the North American Mesoscale (NAM) model and the NAM Data Assimilation System (NDAS) analysis used by National Weather Service forecasters. The Developmental Testbed Center (DTC) manages and distributes the code for the WRF and GSI, but it is up to individual researchers to link the systems together and write scripts to run the systems, which can take considerable time for those not familiar with the code. The objective of this project is to develop and disseminate a set of dynamic scripts that mimic the unique cycling configuration of the operational NAM to enable researchers to develop new modeling and data assimilation techniques that can be easily transferred to operations. The current version of the SPoRT GSI/WRF Scripts (v3.0.1) is compatible with WRF v3.3 and GSI v3.0.
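
    A minimal sketch of the kind of cycled analysis/forecast loop such scripts automate is shown below, assuming placeholder wrapper scripts run_gsi.sh and run_wrf.sh; the names, paths, and cycle settings are hypothetical and do not reflect the actual SPoRT or DTC interfaces.

    ```python
    # Hedged sketch of a cycled DA/forecast driver: each cycle runs a GSI
    # analysis from the previous forecast, then launches the next WRF run.
    import subprocess
    from datetime import datetime, timedelta

    CYCLE_HOURS = 6
    start = datetime(2014, 1, 1, 0)
    n_cycles = 4

    for i in range(n_cycles):
        cycle = start + timedelta(hours=i * CYCLE_HOURS)
        stamp = cycle.strftime("%Y%m%d%H")

        # 1) Run the GSI analysis using the previous WRF forecast as background.
        subprocess.run(["./run_gsi.sh", stamp], check=True)

        # 2) Launch the WRF forecast from the updated analysis.
        subprocess.run(["./run_wrf.sh", stamp, str(CYCLE_HOURS)], check=True)
    ```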

  1. Haplotype block structure study of the CFTR gene. Most variants are associated with the M470 allele in several European populations.

    PubMed

    Pompei, Fiorenza; Ciminelli, Bianca Maria; Bombieri, Cristina; Ciccacci, Cinzia; Koudova, Monika; Giorgi, Silvia; Belpinati, Francesca; Begnini, Angela; Cerny, Milos; Des Georges, Marie; Claustres, Mireille; Ferec, Claude; Macek, Milan; Modiano, Guido; Pignatti, Pier Franco

    2006-01-01

    An average of about 1700 CFTR (cystic fibrosis transmembrane conductance regulator) alleles from normal individuals from different European populations were extensively screened for DNA sequence variation. A total of 80 variants were observed: 61 coding SNSs (results already published), 13 noncoding SNSs, three STRs, two short deletions, and one nucleotide insertion. Eight DNA variants were classified as non-CF causing due to their high frequency of occurrence. Through this survey, CFTR has become the most exhaustively studied gene for its coding sequence variability and, though to a lesser extent, for its noncoding sequence variability as well. Interestingly, most variation was associated with the M470 allele, while the V470 allele showed an 'extended haplotype homozygosity' (EHH). These findings suggest a role for selection acting either on the M470V site itself or through a hitchhiking mechanism involving a second site. The possible ancient origin of the V allele in an 'out of Africa' time frame is discussed.
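
    For readers unfamiliar with the EHH statistic mentioned above, the toy sketch below computes it directly from its definition: the fraction of pairs of core-allele-carrying haplotypes that remain identical from the core site out to a given marker. The haplotype strings are invented and carry no relation to the CFTR data.

    ```python
    # Toy EHH computation over invented 0/1 haplotype strings.
    from itertools import combinations

    def ehh(haplotypes, core_index, end_index):
        """Fraction of haplotype pairs identical from the core site to end_index."""
        pairs = list(combinations(haplotypes, 2))
        if not pairs:
            return 0.0
        lo, hi = sorted((core_index, end_index))
        same = sum(h1[lo:hi + 1] == h2[lo:hi + 1] for h1, h2 in pairs)
        return same / len(pairs)

    # Hypothetical haplotypes carrying the same core allele at position 1.
    core_haps = ["0110", "0110", "0111", "0110"]
    print(ehh(core_haps, core_index=1, end_index=3))   # 0.5
    ```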

  2. Prime Contract Awards Alphabetically by Contractor, by State or Country, and Place, FY 88. Part 6. (Data-Easton Machine Corporation)

    DTIC Science & Technology

    1988-01-01


  3. Dynamic gene expression response to altered gravity in human T cells.

    PubMed

    Thiel, Cora S; Hauschild, Swantje; Huge, Andreas; Tauber, Svantje; Lauber, Beatrice A; Polzer, Jennifer; Paulsen, Katrin; Lier, Hartwin; Engelmann, Frank; Schmitz, Burkhard; Schütte, Andreas; Layer, Liliana E; Ullrich, Oliver

    2017-07-12

    We investigated the dynamics of immediate and initial gene expression response to different gravitational environments in human Jurkat T lymphocytic cells and compared expression profiles to identify potential gravity-regulated genes and adaptation processes. We used the Affymetrix GeneChip® Human Transcriptome Array 2.0 containing 44,699 protein coding genes and 22,829 non-protein coding genes and performed the experiments during a parabolic flight and a suborbital ballistic rocket mission to cross-validate gravity-regulated gene expression through independent research platforms and different sets of control experiments to exclude factors other than the alteration of gravity. We found that gene expression in human T cells rapidly responded to altered gravity in the time frame of 20 s and 5 min. The initial response to microgravity involved mostly regulatory RNAs. We identified three gravity-regulated genes which could be cross-validated in both completely independent experiment missions: ATP6V1A/D, a vacuolar H+-ATPase (V-ATPase) responsible for acidification during bone resorption; IGHD3-3/IGHD3-10, diversity genes of the immunoglobulin heavy-chain locus participating in V(D)J recombination; and LINC00837, a long intergenic non-protein coding RNA. Due to the extensive and rapid alteration of gene expression associated with regulatory RNAs, we conclude that human cells are equipped with a robust and efficient adaptation potential when challenged with altered gravitational environments.

  4. Comparison of two equation-of-state models for partially ionized aluminum: Zel'dovich and Raizer's model versus the activity expansion code

    NASA Astrophysics Data System (ADS)

    Harrach, Robert J.; Rogers, Forest J.

    1981-09-01

    Two equation-of-state (EOS) models for multiply ionized matter are evaluated for the case of an aluminum plasma in the temperature range from about one eV to several hundred eV, spanning conditions of weak to strong ionization. Specifically, the simple analytical model of Zel'dovich and Raizer and the more comprehensive model embodied in Rogers' plasma physics activity expansion code (ACTEX) are used to calculate the specific internal energy ɛ and average degree of ionization Z̄*, as functions of temperature T and density ρ. In the absence of experimental data, these results are compared against each other, covering almost five orders-of-magnitude variation in ɛ and the full range of Z̄*. We find generally good agreement between the two sets of results, especially for low densities and for temperatures near the upper end of the range. Calculated values of ɛ(T) agree to within ±30% over nearly the full range in T for densities below about 1 g/cm³. Similarly, the two models predict values of Z̄*(T) which track each other fairly well; above 20 eV the discrepancy is less than ±20% for ρ ≲ 1 g/cm³. Where the calculations disagree, we expect the ACTEX code to be more accurate than Zel'dovich and Raizer's model, by virtue of its more detailed physics content.
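
    As a rough illustration of the kind of ionization balance such analytical EOS models rest on, the sketch below solves a single-stage Saha equation for the ionization fraction. This is a hedged, generic example with invented inputs; it is neither the Zel'dovich-Raizer model nor ACTEX.

    ```python
    # Single-stage Saha balance: x^2/(1-x) = S/n, solved for ionization fraction x.
    # Constants in SI; inputs are illustrative, not aluminum EOS data.
    import math

    K_B = 1.380649e-23        # J/K
    M_E = 9.1093837e-31       # kg
    H = 6.62607015e-34        # J s

    def saha_ion_fraction(T_eV, ion_density_m3, chi_eV, g_ratio=1.0):
        """Ionization fraction of one stage with ionization energy chi_eV."""
        T = T_eV * 1.602176634e-19 / K_B                      # temperature in K
        S = 2.0 * g_ratio * (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5 \
            * math.exp(-chi_eV / T_eV)
        a = S / ion_density_m3
        # Solve x^2 + a*x - a = 0 for the physical root in [0, 1].
        return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0

    print(saha_ion_fraction(T_eV=2.0, ion_density_m3=1e26, chi_eV=5.99))
    ```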

  5. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

    MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histograming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles including neutrinos are built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package that allows a very efficient and highly-accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Depriest, Kendall

    Unsuccessful attempts by members of the radiation effects community to independently derive the Norgett-Robinson-Torrens (NRT) damage energy factors for silicon in ASTM standard E722-14 led to an investigation of the software coding and data that produced those damage energy factors. The ad hoc collaboration to discover the reason for lack of agreement revealed a coding error and resulted in a report documenting the methodology to produce the response function for the standard. The recommended changes in the NRT damage energy factors for silicon are shown to have significant impact for a narrow energy region of the 1-MeV(Si) equivalent fluence response function. However, when evaluating integral metrics over all neutron energies in various spectra important to the SNL electronics testing community, the change in the response results in a small decrease in the total 1-MeV(Si) equivalent fluence of ~0.6% compared to the E722-14 response. Response functions based on the newly recommended NRT damage energy factors have been produced and are available for users of both the NuGET and MCNP codes.
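
    For orientation, the hedged sketch below evaluates the standard NRT displacement estimate, N_d = 0.8 T_dam / (2 E_d), with its low-energy branches. The threshold value and the example damage energy are illustrative and are not taken from E722.

    ```python
    # NRT displacement estimate per primary knock-on atom (PKA):
    #   N_d = 0      for T_dam < E_d
    #   N_d = 1      for E_d <= T_dam < 2*E_d/0.8
    #   N_d = 0.8*T_dam/(2*E_d) otherwise
    def nrt_displacements(damage_energy_eV, threshold_eV=40.0):
        if damage_energy_eV < threshold_eV:
            return 0
        if damage_energy_eV < 2.0 * threshold_eV / 0.8:
            return 1
        return 0.8 * damage_energy_eV / (2.0 * threshold_eV)

    # Example: a 5 keV damage-energy PKA with an assumed 40 eV threshold.
    print(nrt_displacements(5000.0))   # ~50 displacements
    ```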

  7. Event Generators for Simulating Heavy Ion Interactions of Interest in Evaluating Risks in Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Wilson, Thomas L.; Pinsky, Lawrence; Andersen, Victor; Empl, Anton; Lee, Kerry; Smirmov, Georgi; Zapp, Neal; Ferrari, Alfredo; Tsoulou, Katerina; Roesler, Stefan

    2005-01-01

    Simulating the Space Radiation environment with Monte Carlo Codes, such as FLUKA, requires the ability to model the interactions of heavy ions as they penetrate spacecraft and crew member's bodies. Monte-Carlo-type transport codes use total interaction cross sections to determine probabilistically when a particular type of interaction has occurred. Then, at that point, a distinct event generator is employed to determine separately the results of that interaction. The space radiation environment contains a full spectrum of radiation types, including relativistic nuclei, which are the most important component for the evaluation of crew doses. Interactions between incident protons with target nuclei in the spacecraft materials and crew member's bodies are well understood. However, the situation is substantially less comfortable for incident heavier nuclei (heavy ions). We have been engaged in developing several related heavy ion interaction models based on a Quantum Molecular Dynamics-type approach for energies up through about 5 GeV per nucleon (GeV/A) as part of a NASA Consortium that includes a parallel program of cross section measurements to guide and verify this code development.
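
    To make the two-step picture above concrete, the hedged sketch below shows the generic logic of a transport step: sample the distance to the next interaction from the total macroscopic cross section, then hand the collision to a channel chosen in proportion to the partial cross sections. The numbers and channel names are illustrative and not tied to FLUKA.

    ```python
    # Generic transport-step logic: exponential free-path sampling plus
    # probabilistic channel selection. Values are placeholders.
    import math
    import random

    def sample_path_length(sigma_total_per_cm):
        """Distance (cm) to the next interaction for total macroscopic cross section Sigma."""
        return -math.log(1.0 - random.random()) / sigma_total_per_cm

    def choose_channel(partial_sigmas):
        """Pick an interaction channel with probability proportional to its cross section."""
        total = sum(partial_sigmas.values())
        r = random.uniform(0.0, total)
        acc = 0.0
        for channel, sigma in partial_sigmas.items():
            acc += sigma
            if r <= acc:
                return channel
        return channel  # numerical edge case: return the last channel

    random.seed(0)
    print(sample_path_length(0.05))                              # cm
    print(choose_channel({"elastic": 0.03, "inelastic": 0.02}))
    ```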

  8. Probabilistic Analysis of Aircraft Gas Turbine Disk Life and Reliability

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Zaretsky, Erwin V.; August, Richard

    1999-01-01

    Two series of low cycle fatigue (LCF) test data for two groups of different aircraft gas turbine engine compressor disk geometries were reanalyzed and compared using Weibull statistics. Both groups of disks were manufactured from titanium (Ti-6Al-4V) alloy. A probabilistic computer code, Probable Cause, developed at the NASA Glenn Research Center was used to predict disk life and reliability. A material-life factor A was determined for titanium (Ti-6Al-4V) alloy based upon fatigue disk data and successfully applied to predict the life of the disks as a function of speed. A comparison was made with the currently used life prediction method based upon crack growth rate. Applying an endurance limit to the computer code did not significantly affect the predicted lives under engine operating conditions. Failure location predictions correlate with those experimentally observed in the LCF tests. A reasonable correlation was obtained between the predicted disk lives using the Probable Cause code and a modified crack growth method for life prediction. Both methods slightly overpredict life for one disk group and significantly underpredict it for the other.
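
    A hedged illustration of the Weibull treatment applied to LCF life data is sketched below: fit a two-parameter Weibull distribution to failure cycles and report the L10 life (the life at 90% reliability). The data and parameters are invented and this is not the Probable Cause code.

    ```python
    # Two-parameter Weibull fit of invented failure-cycle data, plus the L10 life.
    import numpy as np
    from scipy import stats

    failure_cycles = np.array([12e3, 18e3, 22e3, 25e3, 31e3, 40e3])

    # Fit shape (beta) and scale (eta) with the location fixed at zero.
    beta, loc, eta = stats.weibull_min.fit(failure_cycles, floc=0.0)

    # L10 life: cycles at which 10% of the population is expected to have failed.
    l10 = eta * (-np.log(0.9)) ** (1.0 / beta)
    print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} cycles, L10 = {l10:.0f} cycles")
    ```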

  9. Application of Fast Multipole Methods to the NASA Fast Scattering Code

    NASA Technical Reports Server (NTRS)

    Dunn, Mark H.; Tinetti, Ana F.

    2008-01-01

    The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
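
    For reference, a bare-bones conjugate gradient iteration is sketched below on a tiny symmetric positive-definite system with an explicit matrix-vector product; the FSC work uses a CGM variant with FMM-accelerated products on a dense scattering system, which this self-contained toy does not attempt to reproduce.

    ```python
    # Plain conjugate gradient on a small SPD system (toy example only).
    import numpy as np

    def conjugate_gradient(matvec, b, tol=1e-8, max_iter=200):
        x = np.zeros_like(b)
        r = b - matvec(x)
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(lambda v: A @ v, b))   # ~[0.0909, 0.6364]
    ```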

  10. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE PAGES

    Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...

    2015-12-15

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.

  11. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.

  12. A Monte Carlo simulation code for calculating damage and particle transport in solids: The case for electron-bombarded solids for electron energies up to 900 MeV

    NASA Astrophysics Data System (ADS)

    Yan, Qiang; Shao, Lin

    2017-03-01

    Current popular Monte Carlo simulation codes for simulating electron bombardment in solids focus primarily on electron trajectories, instead of electron-induced displacements. Here we report a Monte Carlo simulation code, DEEPER (damage creation and particle transport in matter), developed for calculating 3-D distributions of displacements produced by electrons of incident energies up to 900 MeV. Electron elastic scattering is calculated by using full-Mott cross sections for high accuracy, and primary knock-on atom (PKA)-induced damage cascades are modeled using the ZBL potential. We compare and show large differences in 3-D distributions of displacements and electrons in electron-irradiated Fe. The distributions of total displacements are similar to those of PKAs at low electron energies, but they are substantially different for higher energy electrons due to the shifting of PKA energy spectra towards higher energies. The study is important for evaluating electron-induced radiation damage, for applications using high flux electron beams to intentionally introduce defects, and for using an electron analysis beam for microstructural characterization of nuclear materials.

  13. Beam dynamics simulation of HEBT for the SSC-linac injector

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Ni; Yuan, You-Jin; Xiao, Chen; He, Yuan; Wang, Zhi-Jun; Sheng, Li-Na

    2012-11-01

    The SSC-linac (a new injector for the Separated Sector Cyclotron) is being designed in the HIRFL (Heavy Ion Research Facility in Lanzhou) system to accelerate 238U34+ from 3.72 keV/u to 1.008 MeV/u. As a part of the SSC-linac injector, the HEBT (high energy beam transport) has been designed using the TRACE-3D code and simulated with the 3D PIC (particle-in-cell) Track code. The total length of the HEBT is about 12 meters, and a beam line of about 6 meters is shared with the existing beam line of the HIRFL system. The simulation results show that the particles can be delivered efficiently in the HEBT and that the particles at the exit of the HEBT match the acceptance of the SSC well for further acceleration. The dispersion is completely eliminated in the HEBT. The space-charge effect calculated by the Track code is negligible. According to the simulation, more than 60 percent of the particles from the ion source can be transported into the acceptance of the SSC.

  14. Theoretical evaluation of a V/STOL fighter model utilizing the PAN AIR code

    NASA Technical Reports Server (NTRS)

    Howell, G. A.; Bhateley, I. C.

    1982-01-01

    The PAN AIR computer code was investigated as a tool for predicting closely coupled aerodynamic and propulsive flowfields of arbitrary configurations. The NASA/Ames V/STOL fighter model, a configuration of complex geometry, was analyzed with the PAN AIR code. A successful solution for this configuration was obtained when the nozzle exit was treated as an impermeable surface and no wakes were included around the nozzle exit. When separated flow was simulated from the end of the nacelle, requiring the use of wake networks emanating from the nozzle exit, a number of problems were encountered. A circular body nacelle model was used to investigate various techniques for simulating the exhaust plume in PAN AIR. Several approaches were tested and eliminated because they could not correctly simulate the interference effects. Only one plume modeling technique gave good results. A PAN AIR computation that used a plume shape and inflow velocities obtained from the Navier-Stokes solution for the plume produced results for the effects of power that compared well with experimental data.

  15. Residus de 2-formes differentielles sur les surfaces algebriques et applications aux codes correcteurs d'erreurs

    NASA Astrophysics Data System (ADS)

    Couvreur, A.

    2009-05-01

    The theory of algebraic-geometric codes was developed in the beginning of the 1980s after a paper by V.D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first one, called "functional", uses some rational functions on X, and the second one, called "differential", involves some rational 1-forms on this curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction for algebraic varieties of arbitrary dimension was given by Y. Manin in an article of 1984. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. Afterwards, we study the properties of these codes and particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears. Indeed, while in the case of curves a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes which are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that some answers to some open problems "a la Bertini" could give very interesting information on the parameters of these codes.

  16. Nucleon-Nucleon Total Cross Section

    NASA Technical Reports Server (NTRS)

    Norbury, John W.

    2008-01-01

    The total proton-proton and neutron-proton cross sections currently used in the transport code HZETRN show significant disagreement with experiment in the GeV and EeV energy ranges. The GeV range is near the region of maximum cosmic ray intensity. It is therefore important to correct these cross sections, so that predictions of space radiation environments will be accurate. Parameterizations of nucleon-nucleon total cross sections are developed which are accurate over the entire energy range of the cosmic ray spectrum.

  17. Military Interoperable Digital Hospital Testbed (MIDHT)

    DTIC Science & Technology

    2013-10-01

    activities are selected highlights completed by Northrop Grumman during the year. Cycle 4 development: - Increased the max_allowed_packet size in MySQL ...deployment with the Java install that is required by CONNECT v3.3.1.3. - Updated the MIDHT code base to work with the CONNECT v.3.3.1.3 Core Libraries...Provided TATRC the CONNECTUniversalClientGUI binaries for use with CONNECT v3.3.1.3 − Created and deployed a common Java library for the CONNECT

  18. Draft genome sequence of the Coccolithovirus Emiliania huxleyi virus 203.

    PubMed

    Nissimov, Jozef I; Worthy, Charlotte A; Rooks, Paul; Napier, Johnathan A; Kimmance, Susan A; Henn, Matthew R; Ogata, Hiroyuki; Allen, Michael J

    2011-12-01

    The Coccolithoviridae are a recently discovered group of viruses that infect the marine coccolithophorid Emiliania huxleyi. Emiliania huxleyi virus 203 (EhV-203) has a 160- to 180-nm-diameter icosahedral structure and a genome of approximately 400 kbp, consisting of 464 coding sequences (CDSs). Here we describe the genomic features of EhV-203 together with a draft genome sequence and its annotation, highlighting the homology and heterogeneity of this genome in comparison with the EhV-86 reference genome.

  19. Deletion of v-chiA from a baculovirus reduces horizontal transmission in the field

    Treesearch

    Vincent D' Amico; James Slavicek; John D. Podgwaite; Ralph Webb; Roger Fuester; Randall A. Peiffer

    2013-01-01

    Nucleopolyhedroviruses (NPVs) can initiate devastating disease outbreaks in populations of defoliating Lepidoptera, a fact that has been exploited for the purposes of biological control of some pest insects. A key part of the horizontal transmission process of NPVs is the degradation of the larval integument by virus-coded proteins called chitinases, such as V-CHIA...

  20. 49 CFR 178.502 - Identification codes for packagings.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...” means a jerrican. (iv) “4” means a box. (v) “5” means a bag. (vi) “6” means a composite packaging. (vii... natural wood. (iv) “D” means plywood. (v) “F” means reconstituted wood. (vi) “G” means fiberboard. (vii... (other than steel or aluminum). (xi) “P” means glass, porcelain or stoneware. (3) A numeral indicating...

  1. 49 CFR 178.502 - Identification codes for packagings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...” means a jerrican. (iv) “4” means a box. (v) “5” means a bag. (vi) “6” means a composite packaging. (vii... natural wood. (iv) “D” means plywood. (v) “F” means reconstituted wood. (vi) “G” means fiberboard. (vii... (other than steel or aluminum). (xi) “P” means glass, porcelain or stoneware. (3) A numeral indicating...

  2. 49 CFR 178.502 - Identification codes for packagings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...” means a jerrican. (iv) “4” means a box. (v) “5” means a bag. (vi) “6” means a composite packaging. (vii... natural wood. (iv) “D” means plywood. (v) “F” means reconstituted wood. (vi) “G” means fiberboard. (vii... (other than steel or aluminum). (xi) “P” means glass, porcelain or stoneware. (3) A numeral indicating...

  3. 49 CFR 178.502 - Identification codes for packagings.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...” means a jerrican. (iv) “4” means a box. (v) “5” means a bag. (vi) “6” means a composite packaging. (vii... natural wood. (iv) “D” means plywood. (v) “F” means reconstituted wood. (vi) “G” means fiberboard. (vii... (other than steel or aluminum). (xi) “P” means glass, porcelain or stoneware. (3) A numeral indicating...

  4. 77 FR 76356 - Privacy of Consumer Financial Information Under Title V of the Gramm-Leach-Bliley Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-28

    ... Under Title V of the Gramm-Leach-Bliley Act CFR Correction In Title 17 of the Code of Federal...). (2) Title. (3) Key frame (Why?, What?, How?). (4) Disclosure table (``Reasons we can share your... financial institution provides the model form and that institution is clearly identified in the title on...

  5. Establishing and Maintaining Trust for an Airborne Network. Search and Rescue Enterprise: Security Assessment Report

    DTIC Science & Technology

    2014-12-01


  6. Fusion product losses due to fishbone instabilities in deuterium JET plasmas

    NASA Astrophysics Data System (ADS)

    Kiptily, V. G.; Fitzgerald, M.; Goloborodko, V.; Sharapov, S. E.; Challis, C. D.; Frigione, D.; Graves, J.; Mantsinen, M. J.; Beaumont, P.; Garcia-Munoz, M.; Perez von Thun, C.; Rodriguez, J. F. R.; Darrow, D.; Keeling, D.; King, D.; McClements, K. G.; Solano, E. R.; Schmuck, S.; Sips, G.; Szepesi, G.; Contributors, JET

    2018-01-01

    During development of a high-performance hybrid scenario for future deuterium-tritium experiments on the Joint European Torus, an increased level of fast ion losses in the MeV energy range was observed during the instability of high-frequency n = 1 fishbones. The fishbones are excited during deuterium neutral beam injection combined with ion cyclotron heating. The frequency range of the fishbones, 10-25 kHz, indicates that they are driven by a resonant interaction with the NBI-produced deuterium beam ions in the energy range ⩽ 120 keV. The fast particle losses in a much higher energy range are measured with a fast ion loss detector, and the data show an expulsion of deuterium plasma fusion products, 1 MeV tritons and 3 MeV protons, during the fishbone bursts. An MHD mode analysis with the MISHKA code combined with the nonlinear wave-particle interaction code HAGIS shows that the loss of toroidal symmetry caused by the n = 1 fishbones strongly affects the confinement of non-resonant high energy fusion-born tritons and protons by perturbing their orbits and expelling them. This modelling is in good agreement with the experimental data.

  7. Electromagnetic pulse (EMP) coupling codes for use with the vulnerability/lethality (VIL) taxonomy. Final report, June-October 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mar, M.H.

    1995-07-01

    Based on the Vulnerability/Lethality (V/L) taxonomy developed by the Ballistic Vulnerability Lethality Division (BVLD) of the Survivability Lethality Analysis Directorate (SLAD), a nuclear electromagnetic pulse (EMP) coupling V/L analysis taxonomy has been developed. A nuclear EMP threat to a military system can be divided into two levels: (1) coupling to a system level through a cable, antenna, or aperture; and (2) the component level. This report will focus on the initial condition, which includes threat definition and target description, as well as the mapping process from the initial condition to the damaged-components state. EMP coupling analysis at a system level is used to accomplish this. This report introduces the nature of the EMP threat, the interaction between the threat and the target, and how the output of EMP coupling analysis at a system level becomes the input to the component-level analysis. Many different tools (EMP coupling codes) will be discussed for the mapping process, which corresponds to the physics of the phenomenology. This EMP coupling V/L taxonomy and the models identified in this report will provide the tools necessary to conduct basic V/L analysis of EMP coupling.

  8. Spectral and Atomic Physics Analysis of Xenon L-Shell Emission From High Energy Laser Produced Plasmas

    NASA Astrophysics Data System (ADS)

    Thorn, Daniel; Kemp, G. E.; Widmann, K.; Benjamin, R. D.; May, M. J.; Colvin, J. D.; Barrios, M. A.; Fournier, K. B.; Liedahl, D.; Moore, A. S.; Blue, B. E.

    2016-10-01

    The spectrum of the L-shell (n = 2) radiation in mid- to high-Z ions is useful for probing plasma conditions in the multi-keV temperature range. Xenon in particular, with its L-shell radiation centered around 4.5 keV, is copiously produced from plasmas with electron temperatures in the 5-10 keV range. We report on a series of time-resolved L-shell Xe spectra measured with the NIF X-ray Spectrometer (NXS) in high-energy long-pulse (>10 ns) laser-produced plasmas at the National Ignition Facility. The resolving power of the NXS is sufficiently high (E/ΔE > 100) in the 4-5 keV spectral band that the emission from different charge states is observed. An analysis of the time-resolved L-shell spectrum of Xe is presented along with spectral modeling by detailed radiation transport and atomic physics from the SCRAM code and comparison with predictions from HYDRA, a radiation-hydrodynamics code with inline atomic physics from CRETIN. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.

  9. Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    André, T.; Morini, F.; Karamitros, M.; Delorme, R.; Le Loirec, C.; Campos, L.; Champion, C.; Groetz, J.-E.; Fromm, M.; Bordage, M.-C.; Perrot, Y.; Barberet, Ph.; Bernal, M. A.; Brown, J. M. C.; Deleuze, M. S.; Francis, Z.; Ivanchenko, V.; Mascialino, B.; Zacharatou, C.; Bardiès, M.; Incerti, S.

    2014-01-01

    Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. The use of the Kolmogorov-Smirnov test has allowed confirming the statistical compatibility of all simulation results.
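
    As a pointer to what an S-value is, the hedged sketch below applies the MIRD-style definition (mean energy emitted per decay times the absorbed fraction, divided by the target mass) to an invented monoenergetic-electron case; none of the numbers are results from the paper.

    ```python
    # S-value from its definition: S = (energy per decay * absorbed fraction) / mass.
    # All inputs are placeholders for illustration only.
    ELECTRON_ENERGY_MEV = 0.010      # 10 keV monoenergetic electron
    ABSORBED_FRACTION = 0.85         # assumed fraction deposited in the sphere
    SPHERE_MASS_KG = 4.19e-15        # ~1 um radius liquid water sphere

    MEV_TO_J = 1.602e-13
    s_value_gy_per_decay = ELECTRON_ENERGY_MEV * ABSORBED_FRACTION * MEV_TO_J / SPHERE_MASS_KG
    print(f"S-value ~ {s_value_gy_per_decay:.2e} Gy per decay")
    ```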

  10. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4)

    DOE PAGES

    Huntemann, Marcel; Ivanova, Natalia N.; Mavromatis, Konstantinos; ...

    2016-02-24

    The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases.

  11. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huntemann, Marcel; Ivanova, Natalia N.; Mavromatis, Konstantinos

    The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases.

  12. High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark

    2018-06-01

    To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is a need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude where more advanced models find agreement within the experimental uncertainties.

  13. Pion Production from 5-15 GeV Beam for the Neutrino Factory Front-End Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prior, Gersende

    2010-03-30

    For the neutrino factory front-end study, the production of pions from a proton beam of 5-8 and 14 GeV kinetic energy on a Hg jet target has been simulated. The pion yields for two versions of the MARS15 code and two different field configurations have been compared. The particles have also been tracked from the target position down to the end of the cooling channel using the ICOOL code and the neutrino factory baseline lattice. The momentum-angle region of pions producing muons that survived until the end of the cooling channel has been compared with the region covered by HARP data, and the number of pions/muons as a function of the incoming beam energy is also reported.

  14. NONCODE v2.0: decoding the non-coding.

    PubMed

    He, Shunmin; Liu, Changning; Skogerbø, Geir; Zhao, Haitao; Wang, Jie; Liu, Tao; Bai, Baoyan; Zhao, Yi; Chen, Runsheng

    2008-01-01

    The NONCODE database is an integrated knowledge database designed for the analysis of non-coding RNAs (ncRNAs). Since NONCODE was first released 3 years ago, the number of known ncRNAs has grown rapidly, and there is growing recognition that ncRNAs play important regulatory roles in most organisms. In the updated version of NONCODE (NONCODE v2.0), the number of collected ncRNAs has reached 206 226, including a wide range of microRNAs, Piwi-interacting RNAs and mRNA-like ncRNAs. The improvements brought to the database include not only new and updated ncRNA data sets, but also an incorporation of BLAST alignment search service and access through our custom UCSC Genome Browser. NONCODE can be found under http://www.noncode.org or http://noncode.bioinfo.org.cn.

  15. EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.

    2014-04-01

    The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ), are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.
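
    At the heart of any such inventory code is the solution of the coupled decay and transmutation (Bateman) equations dN/dt = A N; the hedged sketch below solves a two-nuclide toy chain with a matrix exponential. The chain, rate constants, and inventories are invented and unrelated to EASY-II or FISPACT-II data.

    ```python
    # Toy Bateman solve: parent decays to daughter, daughter decays away.
    import numpy as np
    from scipy.linalg import expm

    lam_parent, lam_daughter = 1e-3, 5e-4       # decay constants, 1/s (invented)
    A = np.array([[-lam_parent, 0.0],
                  [ lam_parent, -lam_daughter]])

    N0 = np.array([1e20, 0.0])                  # initial atom inventory
    t = 3600.0                                  # decay for one hour
    N_t = expm(A * t) @ N0                      # N(t) = exp(A t) N(0)
    print(N_t)
    ```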

  16. Network interactions: non-geniculate input to V1.

    PubMed

    Muckli, Lars; Petro, Lucy S

    2013-04-01

    The strongest connections to V1 are fed back from neighbouring area V2 and from a network of higher cortical areas (e.g. V3, V5, LOC, IPS and A1), transmitting the results of cognitive operations such as prediction, attention and imagination. V1 is therefore at the receiving end of a complex cortical processing cascade and not only at the entrance stage of cortical processing of retinal input. One elegant strategy to investigate this information-rich feedback to V1 is to eliminate feedforward input, that is, exploit V1's retinotopic organisation to isolate subregions receiving no direct bottom-up stimulation. We highlight the diverse mechanisms of cortical feedback, ranging from gain control to predictive coding, and conclude that V1 is involved in rich internal communication processes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. The impact of three discharge coding methods on the accuracy of diagnostic coding and hospital reimbursement for inpatient medical care.

    PubMed

    Tsopra, Rosy; Peckham, Daniel; Beirne, Paul; Rodger, Kirsty; Callister, Matthew; White, Helen; Jais, Jean-Philippe; Ghosh, Dipansu; Whitaker, Paul; Clifton, Ian J; Wyatt, Jeremy C

    2018-07-01

    Coding of diagnoses is important for patient care, hospital management and research. However, coding accuracy is often poor and may reflect methods of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement. Comparisons of coding inaccuracy were made between a list of coded diagnoses obtained by a coder using (i) the discharge summary alone, (ii) case notes and discharge summary, and (iii) discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary and secondary diagnoses, Healthcare Resource Group (HRG) and estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and coder. 107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when a coder used the discharge summary alone, and decreased significantly when the coder used the case notes (70% vs 58% respectively, p < 0.0001) or coded from the discharge summary with medical support (70% vs 60% respectively, p < 0.0001). When compared with the gold standard, the percentage of incorrect HRGs was 42% for the discharge summary alone, 31% for coding with case notes, and 35% for coding with medical support. The three coding methods resulted in an annual estimated loss of hospital remuneration of between £1.8 M and £16.5 M. The accuracy of diagnosis codes and the percentage of correct HRGs improved when coders used either case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Mistranslation: from adaptations to applications.

    PubMed

    Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J

    2017-11-01

    The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionary conserved mechanisms to maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations to the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Spallation reactions: A successful interplay between modeling and applications

    NASA Astrophysics Data System (ADS)

    David, J.-C.

    2015-06-01

    The spallation reactions are a type of nuclear reaction which occur in space by interaction of the cosmic rays with interstellar bodies. The first spallation reactions induced with an accelerator took place in 1947 at the Berkeley cyclotron (University of California) with 200 MeV deuteron and 400 MeV alpha beams. They highlighted the multiple emission of neutrons and charged particles and the production of a large number of residual nuclei far different from the target nuclei. In the same year, R. Serber described the reaction in two steps: a first and fast one with high-energy particle emission leading to an excited remnant nucleus, and a second one, much slower, the de-excitation of the remnant. In 2010 IAEA organized a workshop to present the results of the most widely used spallation codes within a benchmark of spallation models. While one of the goals was to understand the deficiencies, if any, in each code, one remarkable outcome was the overall high quality of some models and thus the great improvements achieved since Serber. Particle transport codes can then rely on such spallation models to treat the reactions between a light particle and an atomic nucleus with energies spanning from a few tens of MeV up to some GeV. An overview of spallation reaction modeling is presented in order to point out the incomparable contribution of models based on basic physics to the numerous applications where such reactions occur. Validations and benchmarks, which are necessary steps in the improvement process, are also addressed, as well as the potential future domains of development. Spallation reaction modeling is a representative case of continuous studies aimed at understanding a reaction mechanism that end up producing a powerful tool.

  20. Neutron-induced fission cross-section measurement of 234U with quasi-monoenergetic beams in the keV and MeV range using micromegas detectors

    NASA Astrophysics Data System (ADS)

    Tsinganis, A.; Kokkoris, M.; Vlastou, R.; Kalamara, A.; Stamatopoulos, A.; Kanellakopoulos, A.; Lagoyannis, A.; Axiotis, M.

    2017-09-01

    Accurate data on neutron-induced fission cross-sections of actinides are essential for the design of advanced nuclear reactors based either on fast neutron spectra or alternative fuel cycles, as well as for the reduction of safety margins of existing and future conventional facilities. The fission cross-section of 234U was measured at incident neutron energies of 560 and 660 keV and 7.5 MeV with a setup based on `microbulk' Micromegas detectors and the same samples previously used for the measurement performed at the CERN n_TOF facility (Karadimos et al., 2014). The 235U fission cross-section was used as reference. The (quasi-)monoenergetic neutron beams were produced via the 7Li(p,n) and the 2H(d,n) reactions at the neutron beam facility of the Institute of Nuclear and Particle Physics at the `Demokritos' National Centre for Scientific Research. A detailed study of the neutron spectra produced in the targets and intercepted by the samples was performed coupling the NeuSDesc and MCNPX codes, taking into account the energy spread, energy loss and angular straggling of the beam ions in the target assemblies, as well as contributions from competing reactions and neutron scattering in the experimental setup. Auxiliary Monte-Carlo simulations were performed with the FLUKA code to study the behaviour of the detectors, focusing particularly on the reproduction of the pulse height spectra of α-particles and fission fragments (using distributions produced with the GEF code) for the evaluation of the detector efficiency. An overview of the developed methodology and preliminary results are presented.
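
    A hedged sketch of the reference (ratio) method implied by using the 235U fission cross-section as reference is shown below: the 234U cross section follows from the ratio of fission counts, scaled by the relative sample atom numbers and detection efficiencies. All numbers are invented and carry no relation to the measurement.

    ```python
    # Generic ratio-method estimate of a cross section relative to a reference sample.
    def ratio_cross_section(counts_meas, counts_ref, atoms_meas, atoms_ref,
                            eff_meas, eff_ref, sigma_ref_barn):
        """sigma_meas = sigma_ref * (C_m/C_r) * (N_r/N_m) * (eps_r/eps_m)."""
        return sigma_ref_barn * (counts_meas / counts_ref) \
                              * (atoms_ref / atoms_meas) \
                              * (eff_ref / eff_meas)

    # Invented counts, atom numbers, efficiencies, and reference cross section.
    print(ratio_cross_section(counts_meas=1.2e4, counts_ref=5.0e4,
                              atoms_meas=6.0e17, atoms_ref=5.5e17,
                              eff_meas=0.92, eff_ref=0.95, sigma_ref_barn=1.2))
    ```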

  1. Federal Logistics Information System (FLIS) Procedures Manual, Volume 4. Item Identification.

    DTIC Science & Technology

    1995-01-01


  2. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  3. Girls Who Code Club | College of Engineering & Applied Science

    Science.gov Websites

    University of Wisconsin-Milwaukee College of Engineering & Applied Science listing for student programs including the Girls Who Code Club, the FIRST Tech Challenge, and the NSF I-Corps Site of Southeastern Wisconsin.

  4. ANITA-IEAF activation code package - updating of the decay and cross section data libraries and validation on the experimental data from the Karlsruhe Isochronous Cyclotron

    NASA Astrophysics Data System (ADS)

    Frisoni, Manuela

    2017-09-01

    ANITA-IEAF is an activation package (code and libraries) previously developed at ENEA Bologna to assess the activation of materials exposed to neutrons with energies greater than 20 MeV. An updated version of the ANITA-IEAF activation code package has been developed. It is suitable for application to the study of irradiation effects on materials in facilities like the International Fusion Materials Irradiation Facility (IFMIF) and the DEMO Oriented Neutron Source (DONES), in which a considerable amount of neutrons with energies above 20 MeV is produced. The present paper summarizes the main characteristics of the updated version of ANITA-IEAF, able to use decay and cross section data based on more recent evaluated nuclear data libraries, i.e. the JEFF-3.1.1 Radioactive Decay Data Library and the EAF-2010 neutron activation cross section library. The paper also presents the validation effort comparing the code predictions with activity measurements obtained from the Karlsruhe Isochronous Cyclotron. In this integral experiment, samples of two different steels, SS-316 and F82H, pure vanadium and a vanadium alloy, structural materials of interest in fusion technology, were activated in a neutron spectrum similar to the IFMIF neutron field.

  5. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Battnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  6. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance becomes significantly degraded. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well due to the lack of a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, with about 20% BD-rate savings in the coded views and 26% BD-rate savings in the synthesized views on average.
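
    A hedged sketch of the general idea, scanning spatial neighbours of the current macroblock and reusing the first inter-view (disparity) motion vector found, is given below; the data layout and field names are illustrative and are not the 3D-AVC syntax or the exact derivation order used in the standard.

    ```python
    # Toy neighboring-block disparity vector derivation: return the first
    # inter-view motion vector found among the spatial neighbours.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class NeighborBlock:
        is_inter_view: bool                 # True if predicted from another view
        motion_vector: Tuple[int, int]      # (dx, dy) in quarter-pel units

    def derive_disparity_vector(neighbors) -> Optional[Tuple[int, int]]:
        """Scan neighbours (e.g. left, above, above-right) for an inter-view MV."""
        for block in neighbors:
            if block is not None and block.is_inter_view:
                return block.motion_vector
        return None   # a real codec would fall back to a zero disparity vector

    left = NeighborBlock(is_inter_view=False, motion_vector=(4, 0))
    above = NeighborBlock(is_inter_view=True, motion_vector=(-12, 0))
    print(derive_disparity_vector([left, above, None]))   # (-12, 0)
    ```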

  7. A discriminative test among the different theories proposed to explain the origin of the genetic code: the coevolution theory finds additional support.

    PubMed

    Giulio, Massimo Di

    2018-05-19

    A discriminative statistical test among the different theories proposed to explain the origin of the genetic code is presented. The amino acids are gathered into polarity classes, which are the expression of the physicochemical theory of the origin of the genetic code, and into biosynthetic classes, the corresponding expression of the coevolution theory, and these classes are used in Fisher's exact test to establish their significance within the genetic code table. By attaching to the rows and columns of the genetic code the probabilities that express the statistical significance of these classes, it was finally possible to calculate a χ value for both the physicochemical theory and the coevolution theory that expresses the level of corroboration of each theory. The comparison between these two χ values showed that, in this strictly empirical analysis, the coevolution theory explains the origin of the genetic code better than the physicochemical theory. Copyright © 2018 Elsevier B.V. All rights reserved.
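
    As a pointer to the mechanics of such a test, the hedged toy below runs Fisher's exact test on an invented 2x2 table relating membership in an amino-acid class to occupancy of a given block of the codon table; the counts are illustrative and are not drawn from the paper.

    ```python
    # Fisher's exact test on an invented 2x2 contingency table.
    from scipy.stats import fisher_exact

    #                 in column block   not in column block
    # in class               6                  3
    # not in class           2                 10
    table = [[6, 3], [2, 10]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    ```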

  8. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC2-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  9. Characterization of Plasma Flow through Magnetic Nozzles

    DTIC Science & Technology

    1990-02-01

    (Only scanned report front matter is legible for this record; the recoverable fragments are the acknowledgments, the executive summary, a section on convective transport, and the development and application of a code for steady ideal MHD flow through magnetic nozzles.)

  10. Preparation and Properties of the System Cr2-xRhxO3 (2 ≥ x ≥ 0).

    DTIC Science & Technology

    1988-01-26


  11. Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Calvin Version 3.0, 10074-Vtr-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  12. Estimating neutron dose equivalent rates from heavy ion reactions around 10 MeV amu(-1) using the PHITS code.

    PubMed

    Iwamoto, Yosuke; Ronningen, R M; Niita, Koji

    2010-04-01

    It has sometimes been necessary for personnel to work in areas where low-energy heavy ions interact with targets or with beam transport equipment and thereby produce significant levels of radiation. Methods to predict doses and to assist shielding design are desirable. The Particle and Heavy Ion Transport code System (PHITS) has typically been used to predict radiation levels around high-energy (above 100 MeV amu(-1)) heavy ion accelerator facilities. However, predictions by PHITS of radiation levels around low-energy (around 10 MeV amu(-1)) heavy ion facilities have, to our knowledge, not yet been investigated. The influence of the "switching time" in PHITS calculations of low-energy heavy ion reactions, defined as the time when the JAERI Quantum Molecular Dynamics model (JQMD) calculation stops and the Generalized Evaporation Model (GEM) calculation begins, was studied using neutron energy spectra from 6.25 MeV amu(-1) and 10 MeV amu(-1) (12)C ions and 10 MeV amu(-1) (16)O ions incident on a copper target. Using a value of 100 fm c(-1) for the switching time, the calculated neutron energy spectra agree well with the experimental data. PHITS was then used with the switching time of 100 fm c(-1) to simulate an experimental study by Ohnesorge et al. by calculating neutron dose equivalent rates produced by 3 MeV amu(-1) to 16 MeV amu(-1) (12)C, (14)N, (16)O, and (20)Ne beams incident on iron, nickel and copper targets. The calculated neutron dose equivalent rates agree very well with the data and follow a general pattern which appears to be insensitive to the heavy ion species but sensitive to the target material.

  13. Organ dose conversion coefficients based on a voxel mouse model and MCNP code for external photon irradiation.

    PubMed

    Zhang, Xiaomin; Xie, Xiangdong; Cheng, Jie; Ning, Jing; Yuan, Yong; Pan, Jie; Yang, Guoshan

    2012-01-01

    A set of conversion coefficients from kerma free-in-air to organ absorbed dose for external photon beams from 10 keV to 10 MeV is presented, based on a newly developed voxel mouse model, for the purpose of radiation effect evaluation. The voxel mouse model was developed from colour images of successive cryosections of a normal nude male mouse, in which 14 organs or tissues were segmented manually and filled with different colours, each colour being tagged with a specific ID number for implementation of the mouse model in the Monte Carlo N-Particle code (MCNP). Monte Carlo simulation with MCNP was carried out to obtain organ dose conversion coefficients for 22 external monoenergetic photon beams between 10 keV and 10 MeV under five different irradiation geometries (left lateral, right lateral, dorsal-ventral, ventral-dorsal, and isotropic). Organ dose conversion coefficients are presented in tables and compared with published data based on a rat model to investigate the effect of body size and weight on the organ dose. The results show that the organ dose conversion coefficients as a function of photon energy exhibit a similar trend for most organs, except for bone and skin, and that the organ dose is sensitive to body size and weight at photon energies below approximately 0.1 MeV.
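
    The tabulated quantity is, schematically (generic notation introduced here, not the paper's symbols),

        c_T(E) \;=\; \frac{D_T(E)}{K_a(E)} \qquad [\mathrm{Gy\,Gy^{-1}}],

    where D_T is the absorbed dose scored in organ T by the MCNP tally and K_a is the kerma free-in-air for the same monoenergetic photon field.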

  14. BADGER v1.0: A Fortran equation of state library

    NASA Astrophysics Data System (ADS)

    Heltemes, T. A.; Moses, G. A.

    2012-12-01

    The BADGER equation of state library was developed to enable inertial confinement fusion plasma codes to more accurately model plasmas in the high-density, low-temperature regime. The code has the capability to calculate 1- and 2-T plasmas using the Thomas-Fermi model and an individual electron accounting model. Ion equation of state data can be calculated using an ideal gas model or via a quotidian equation of state with scaled binding energies. Electron equation of state data can be calculated via the ideal gas model or with an adaptation of the screened hydrogenic model with ℓ-splitting. The ionization and equation of state calculations can be done in local thermodynamic equilibrium or in a non-LTE mode using a variant of the Busquet equivalent temperature method. The code was written as a stand-alone Fortran library for ease of implementation by external codes. EOS results for aluminum are presented that show good agreement with the SESAME library, and ionization calculations show good agreement with the FLYCHK code.
    Program summary:
    Program title: BADGERLIB v1.0
    Catalogue identifier: AEND_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEND_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 41 480
    No. of bytes in distributed program, including test data, etc.: 2 904 451
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: 32- or 64-bit PC, or Mac
    Operating system: Windows, Linux, MacOS X
    RAM: 249.496 kB plus 195.630 kB per isotope record in memory
    Classification: 19.1, 19.7
    Nature of problem: Equation of state (EOS) calculations are necessary for the accurate simulation of high energy density plasmas. Historically, most EOS codes used in these simulations have relied on an ideal gas model. This model is inadequate for low-temperature, high-density plasma conditions; the gaseous and liquid phases; and the solid phase. The BADGER code was developed to give more realistic EOS data in these regimes.
    Solution method: BADGER has multiple, user-selectable models to treat the ions, the average-atom ionization state and the electrons. Ion models are the ideal gas and the quotidian equation of state (QEOS); ionization models are Thomas-Fermi and the individual electron accounting method (IEM) formulation of the screened hydrogenic model (SHM) with ℓ-splitting; electron models are the ideal gas and a Helmholtz free energy minimization method derived from the SHM. The default equation of state and ionization models are appropriate for plasmas in local thermodynamic equilibrium (LTE). The code can calculate non-LTE equation of state and ionization data using a simplified form of the Busquet equivalent-temperature method.
    Restrictions: Physical data are only provided for elements Z=1 to Z=86. Multiple solid phases are not currently supported. Liquid, gas and plasma phases are combined into a generalized "fluid" phase.
    Unusual features: BADGER divorces the calculation of average-atom ionization from the electron equation of state model, allowing the user to select the ionization and electron EOS models that are most appropriate to the simulation. The included ion ideal gas model uses ground-state nuclear spin data to differentiate between isotopes of a given element.
    Running time: The example provided takes only a few seconds to run.
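
    For orientation, the ideal-gas baseline that the summary describes as inadequate at high density and low temperature is simply (standard textbook relations, not BADGER's models)

        P = n_i k_B T_i + n_e k_B T_e, \qquad u = \tfrac{3}{2}\left(n_i k_B T_i + n_e k_B T_e\right),

    for a two-temperature plasma with ion and electron number densities n_i and n_e; BADGER's QEOS and screened-hydrogenic options replace these relations where degeneracy, ionization and binding effects matter.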

  15. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
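
    As context for the screened Rutherford angular scattering mentioned above, the sketch below samples scattering cosines from that distribution by analytically inverting its cumulative distribution (an illustrative sketch with an arbitrary screening parameter, not ecode's implementation):

        import numpy as np

        def sample_screened_rutherford(eta, size, rng=None):
            """Sample mu = cos(theta) from f(mu) proportional to 1/(1 + 2*eta - mu)**2
            on [-1, 1], where eta is the screening parameter."""
            if rng is None:
                rng = np.random.default_rng()
            xi = rng.random(size)                      # uniform random numbers in [0, 1)
            return 1.0 - 2.0 * eta * (1.0 - xi) / (eta + xi)

        mu = sample_screened_rutherford(eta=0.01, size=5)
        print(mu)   # mostly close to +1: forward-peaked scattering for small eta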

  16. Hard gamma radiation background from coding collimator of gamma telescope under space experiment conditions

    NASA Astrophysics Data System (ADS)

    Aleksandrov, A. P.; Berezovoy, A. N.; Galper, A. M.; Grachev, V. M.; Dmitrenko, V. V.; Kirillov-Ugryumov, V. G.; Lebedev, V. V.; Lyakhov, V. A.; Moiseyev, A. A.; Ulin, S. Y.

    1985-09-01

    Coding collimators are used to improve the angular resolution of gamma-ray telescopes at energies above 50 MeV. However, the interaction of cosmic rays with the collimation material can lead to the appearance of a gamma-ray background flux which can have a deleterious effect on measurement efficiency. An experiment was performed on the Salyut-6-Soyuz spacecraft system with the Elena-F small-scale gamma-ray telescope in order to measure the magnitude of this background. It is shown that, even at a zenith angle of approximately zero degrees (the angle at which the gamma-ray observations are made), the coding collimator has only an insignificant effect on the background conditions.

  17. A Viscoelastic-Plastic Constitutive Model with a Finite Element Solution Methodology

    DTIC Science & Technology

    1978-06-01

    (Only fragments of the scanned finite element formulation are legible: K = ∫ B^T D B dv (4-15), with K identified in the text as the global viscoelastic stiffness matrix, and a viscoplastic force integral of the form F_vp = ∫ B^T (·)_vp dv (4-17); the remainder of the page is an unreadable distribution list.)

  18. Automatic coding and selection of causes of death: an adaptation of Iris software for using in Brazil.

    PubMed

    Martins, Renata Cristófani; Buchalla, Cassia Maria

    2015-01-01

    To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary of all illnesses and injuries was created based on International Classification of Diseases - tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a death certificate sample from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made in the dictionary. Iris was able to code all causes of death in 94.4% of death certificates, but only 50.6% were directly coded, without adjustments. Among the death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes showed lower agreement between the Iris coding and the manual coding. The software performed well, but it needs adjustments and improvement of its dictionary. In upcoming versions of the software, its developers are working to solve the problem with external causes of death.

  19. Monte Carlo simulation of proton track structure in biological matter

    DOE PAGES

    Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.; ...

    2017-05-25

    Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.

  20. CEPXS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    CEPXS is a multigroup-Legendre cross-section generating code. The cross sections produced by CEPXS enable coupled electron-photon transport calculations to be performed with multigroup radiation transport codes, e.g. MITS and SCEPTRE. CEPXS generates multigroup-Legendre cross sections for photons, electrons and positrons over the energy range from 100 MeV to 1.0 keV. The continuous slowing-down approximation is used for those electron interactions that result in small-energy losses. The extended transport correction is applied to the forward-peaked elastic scattering cross section for electrons. A standard multigroup-Legendre treatment is used for the other coupled electron-photon cross sections. CEPXS extracts electron cross-section information from the DATAPAC data set and photon cross-section information from Biggs-Lighthill data. The model that is used for ionization/relaxation in CEPXS is essentially the same as that employed in ITS.
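
    For orientation, the multigroup-Legendre representation that such a library supplies can be written, per unit scattering cosine and in generic notation (not CEPXS's internal format), as

        \sigma_s^{g \to g'}(\mu_0) \;\simeq\; \sum_{l=0}^{L} \frac{2l+1}{2}\, \sigma_{s,l}^{g \to g'}\, P_l(\mu_0),

    where g and g' index energy groups, \mu_0 is the scattering cosine, P_l are Legendre polynomials, and the moments \sigma_{s,l}^{g \to g'} are what a transport code such as MITS or SCEPTRE consumes.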

  1. Multimode imaging device

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M

    2013-08-27

    Apparatus for detecting and locating a source of gamma rays of energies ranging from 10-20 keV to several MeV includes plural gamma ray detectors arranged in a generally closed extended array so as to provide Compton scattering imaging and coded aperture imaging simultaneously. First detectors are arranged in a spaced manner about a surface defining the closed extended array, which may be in the form of a circle, a sphere, a square, a pentagon or a higher-order polygon. Some of the gamma rays are absorbed by the first detectors closest to the gamma source in Compton scattering, while the photons that go unabsorbed, passing through gaps disposed between adjacent first detectors, are incident upon second detectors disposed on the side farthest from the gamma ray source, where the first, spaced detectors form a coded aperture array for two- or three-dimensional gamma ray source detection.
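
    The Compton-imaging part of such a device rests on standard Compton kinematics: if a photon of initial energy E_0 deposits E_1 in the first (scattering) detector and the remainder E_2 = E_0 - E_1 in an outer detector, the scattering angle follows from

        \cos\theta \;=\; 1 - m_e c^2 \left(\frac{1}{E_2} - \frac{1}{E_0}\right),

    which restricts the source direction to a cone about the scatter axis (a textbook relation, not a quotation from the patent).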

  2. Energy levels and radiative rates for transitions in Cr-like Co IV and Ni V

    NASA Astrophysics Data System (ADS)

    Aggarwal, K. M.; Bogdanovich, P.; Karpuškienė, R.; Keenan, F. P.; Kisielius, R.; Stancalie, V.

    2016-01-01

    We report calculations of energy levels and radiative rates (A-values) for transitions in Cr-like Co IV and Ni V. The quasi-relativistic Hartree-Fock (QRHF) code is adopted for calculating the data although GRASP (general-purpose relativistic atomic structure package) and flexible atomic code (FAC) have also been employed for comparison purposes. No radiative rates are available in the literature to compare with our results, but our calculated energies are in close agreement with those compiled by NIST for a majority of the levels. However, there are discrepancies for a few levels of up to 3%. The A-values are listed for all significantly contributing E1, E2 and M1 transitions, and the corresponding lifetimes reported, although unfortunately no previous theoretical or experimental results exist to compare with our data.
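
    The reported lifetimes follow from the A-values in the usual way (generic notation):

        \tau_j \;=\; \left(\sum_{i<j} A_{j \to i}\right)^{-1},

    i.e. the reciprocal of the sum of all radiative decay rates out of level j.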

  3. Calculation of conversion coefficients for clinical photon spectra using the MCNP code.

    PubMed

    Lima, M A F; Silva, A X; Crispim, V R

    2004-01-01

    In this work, the MCNP4B code has been employed to calculate conversion coefficients from air kerma to the ambient dose equivalent, H*(10)/Ka, for monoenergetic photon energies from 10 keV to 50 MeV, assuming the kerma approximation. Also estimated are the H*(10)/Ka values for photon beams produced by linear accelerators, such as the Clinac-4 and Clinac-2500, after transmission through primary barriers of radiotherapy treatment rooms. The results for the conversion coefficients for monoenergetic photon energies, with statistical uncertainty <2%, are compared with those in ICRP Publication 74, and good agreement was obtained. The conversion coefficients calculated for real clinical spectra transmitted through concrete walls 1, 1.5 and 2 m thick are in the range 1.06-1.12 Sv Gy(-1).

  4. Monte Carlo simulation of proton track structure in biological matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.

    Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.

  5. Energy deposition calculated by PHITS code in Pb spallation target

    NASA Astrophysics Data System (ADS)

    Yu, Quanzhi

    2016-01-01

    Energy deposition in a Pb spallation target irradiated by high-energy protons was calculated with the PHITS2.52 code. The energy deposition and neutron production calculated by the PHITS code were validated, and the results show good agreement between the simulation and the experimental data. A detailed comparison shows that the total energy deposition from the PHITS simulation overestimates the experimental data by about 15%. For the energy deposition along the length of the Pb target, the discrepancy appears mainly at the front part of the target. The calculation indicates that most of the energy deposition comes from ionization by the primary protons and the produced secondary particles. With the event generator mode of PHITS, the deposited energy distribution for the particles and the light nuclei is presented for the first time. It indicates that primary protons with energies above 100 MeV are the main contributors to the total energy deposition. The energy depositions peaking at 10 MeV and 0.1 MeV are mainly caused by the electrons, pions, d, t, 3He and α particles produced during the cascade process and the evaporation process, respectively. The energy deposition density caused by different proton beam profiles is also calculated and compared. Such calculations and analyses are helpful for better understanding the physical mechanism of energy deposition in the spallation target, and useful for the thermal hydraulic design of the spallation target.

  6. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
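
    As a point of reference for the kernel-based metrics discussed, a naive sum-match kernel between two sets of local descriptors with a Gaussian local kernel looks like the sketch below; its cost is quadratic in the number of features, which is exactly what a linear-complexity method such as LCMK is designed to avoid (an illustration, not the paper's algorithm):

        import numpy as np

        def sum_match_kernel(X, Y, sigma=1.0):
            """Naive set kernel: average Gaussian local kernel over all feature pairs.
            X and Y are arrays of shape (n, d) and (m, d) holding local descriptors."""
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
            return np.exp(-d2 / (2.0 * sigma ** 2)).mean()

        rng = np.random.default_rng(0)
        X, Y = rng.normal(size=(50, 128)), rng.normal(size=(60, 128))
        print(sum_match_kernel(X, Y, sigma=10.0))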

  7. Formal Compiler Implementation in a Logical Framework

    DTIC Science & Technology

    2003-04-29

    variable set [], we omit the brackets and use the simpler notation v. MetaPRL is a tactic-based prover that uses OCaml [20] as its meta-language. When a rewrite is defined in MetaPRL, the framework creates an OCaml expression that can be used to apply the rewrite. Code to guide the application of rewrites is written in OCaml, using a rich set of primitives provided by MetaPRL. MetaPRL automates the construction of most guidance code; we describe

  8. Robust Control of Multivariable and Large Scale Systems.

    DTIC Science & Technology

    1986-03-14


  9. SSPARAMA: A Nonlinear, Wave Optics Multipulse (and CW) Steady-State Propagation Code with Adaptive Coordinates

    DTIC Science & Technology

    1977-02-10

    (Scanned report cover page, K. G. Whitney; apart from the standard DTIC notice that portions of the furnished copy do not reproduce legibly, the remaining text is not recoverable.)

  10. General 3D Airborne Antenna Radiation Pattern Code Users Manual.

    DTIC Science & Technology

    1983-02-01


  11. Diffusion and Equilibrium Swelling of Macromolecular Networks by Their Linear Homologs.

    DTIC Science & Technology

    1982-10-01


  12. Threshold Tear Strength of Elastomers.

    DTIC Science & Technology

    1982-04-01


  13. Ex-Situ and In-Situ Ellipsometric Studies of the Thermal Oxide on InP

    DTIC Science & Technology

    1990-12-06

    … The thermally grown InP oxide as etched by an aqueous … aqueous NH4OH/NH4F, and Law (17) has reported observations of orientational ordering of water and organic solvents on pyrex surfaces by in-situ … minutes, followed by a sequence of acetone and deionized water (d. i. water) rinses. After being dipped in a concentrated aqueous HF solution for 15 seconds …

  14. Transient Heat Transfer in Coated Superconductors.

    DTIC Science & Technology

    1982-10-29

    Details of the use of the SCEPTRE code are contained in the instruction manual and the book on the code [30]. An example of an actual SCEPTRE program is given in …

  15. 1DTempPro V2: new features for inferring groundwater/surface-water exchange

    USGS Publications Warehouse

    Koch, Franklin W.; Voytek, Emily B.; Day-Lewis, Frederick D.; Healy, Richard W.; Briggs, Martin A.; Lane, John W.; Werkema, Dale D.

    2016-01-01

    A new version of the computer program 1DTempPro extends the original code to include new capabilities for (1) automated parameter estimation, (2) layer heterogeneity, and (3) time-varying specific discharge. The code serves as an interface to the U.S. Geological Survey model VS2DH and supports analysis of vertical one-dimensional temperature profiles under saturated flow conditions to assess groundwater/surface-water exchange and estimate hydraulic conductivity for cases where hydraulic head is known.
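
    The one-dimensional heat transport that such temperature-profile analyses invert is commonly written, for saturated conditions and in generic notation (not code-specific), as

        \frac{\partial T}{\partial t} \;=\; \kappa_e \frac{\partial^2 T}{\partial z^2} \;-\; \frac{q\, \rho_w c_w}{\rho c} \frac{\partial T}{\partial z},

    where q is the vertical specific discharge being estimated, \rho_w c_w and \rho c are the volumetric heat capacities of water and of the saturated sediment, and \kappa_e is the effective thermal diffusivity.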

  16. Stopping power for 4.8-6.8 MeV C ions along [1 1 0] and [1 1 1] directions in Si

    NASA Astrophysics Data System (ADS)

    Yoneda, Tomoaki; Horikawa, Junsei; Saijo, Satoshi; Arakawa, Masakazu; Yamamoto, Yukio; Yamamoto, Yasukazu

    2018-06-01

    The stopping power for C ions with energies in the range of 4.8-6.8 MeV was investigated in a SIMOX (Separation by IMplanted OXygen into silicon) structure of Si(1 0 0)/SiO2/Si(1 0 0). Backscattering spectra were measured for random and channeling incidence along the [1 1 0] and [1 1 1] axes. The scattering angle was set to 90° to avoid an excessive decrease of the kinematic factor. The ratios of the [1 1 0] and [1 1 1] channeling to the random stopping power were determined to be around 0.65 and 0.77, respectively, for 4.8-6.8 MeV ions. The validity of the impact-parameter-dependent stopping power calculated using Grande and Schiwietz's CasP (convolution approximation for swift particles) code was confirmed. The C ion trajectories and flux distributions in crystalline silicon were calculated by Monte Carlo simulation. The stopping power calculated with the CasP code agrees with the experimental results within the accuracy of the measurement.

  17. Type IV pili of Acidithiobacillus ferrooxidans can transfer electrons from extracellular electron donors.

    PubMed

    Li, Yongquan; Li, Hongyu

    2014-03-01

    Studies on Acidithiobacillus ferrooxidans accepting electrons from Fe(II) have previously focused on cytochrome c. However, we have discovered that, besides cytochrome c, type IV pili (Tfp) can transfer electrons. Here, we report conduction by Tfp of A. ferrooxidans analyzed with a conducting-probe atomic force microscope (AFM). The results indicate that the Tfp of A. ferrooxidans are highly conductive. The genome sequence of A. ferrooxidans ATCC 23270 contains two genes, pilV and pilW, which code for pilin domain proteins with the conserved amino acids characteristic of Tfp. Multiple alignment analysis of the PilV and PilW (pilin) proteins indicated that pilV is the adhesin gene while pilW codes for the major protein element of Tfp. The likely function of Tfp is to complete the circuit between the cell surface and Fe(II) oxides. These results indicate that Tfp of A. ferrooxidans might serve as biological nanowires transferring electrons from the surface of Fe(II) oxides to the cell surface. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Three dimensional equilibrium solutions for a current-carrying reversed-field pinch plasma with a close-fitting conducting shell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koliner, J. J.; Boguski, J., E-mail: boguski@wisc.edu; Anderson, J. K.

    2016-03-15

    In order to characterize the Madison Symmetric Torus (MST) reversed-field pinch (RFP) plasmas that bifurcate to a helical equilibrium, the V3FIT equilibrium reconstruction code was modified to include a conducting boundary. RFP plasmas become helical at a high plasma current, which induces large eddy currents in MST's thick aluminum shell. The V3FIT conducting boundary accounts for the contribution from these eddy currents to external magnetic diagnostic coil signals. This implementation of V3FIT was benchmarked against MSTFit, a 2D Grad-Shafranov solver, for axisymmetric plasmas. The two codes both fit Bθ measurement loops around the plasma minor diameter with qualitative agreement between each other and the measured field. Fits in the 3D case converge well, with q-profile and plasma shape agreement between two distinct toroidal locking phases. Greater than 60% of the measured n = 5 component of Bθ at r = a is due to eddy currents in the shell, as calculated by the conducting boundary model.

  19. Three dimensional equilibrium solutions for a current-carrying reversed-field pinch plasma with a close-fitting conducting shell

    DOE PAGES

    Koliner, J. J.; Boguski, J.; Anderson, J. K.; ...

    2016-03-25

    In order to characterize the Madison Symmetric Torus (MST) reversed-field pinch (RFP) plasmas that bifurcate to a helical equilibrium, the V3FIT equilibrium reconstruction code was modified to include a conducting boundary. RFP plasmas become helical at a high plasma current, which induces large eddy currents in MST's thick aluminum shell. The V3FIT conducting boundary accounts for the contribution from these eddy currents to external magnetic diagnostic coil signals. This implementation of V3FIT was benchmarked against MSTFit, a 2D Grad-Shafranov solver, for axisymmetric plasmas. The two codes both fit Bθ measurement loops around the plasma minor diameter with qualitative agreement between each other and the measured field. Fits in the 3D case converge well, with q-profile and plasma shape agreement between two distinct toroidal locking phases. Greater than 60% of the measured n = 5 component of Bθ at r = a is due to eddy currents in the shell, as calculated by the conducting boundary model.

  20. Study of neutron spectra in a water bath from a Pb target irradiated by 250 MeV protons

    NASA Astrophysics Data System (ADS)

    Li, Yan-Yan; Zhang, Xue-Ying; Ju, Yong-Qin; Ma, Fei; Zhang, Hong-Bin; Chen, Liang; Ge, Hong-Lin; Wan, Bo; Luo, Peng; Zhou, Bin; Zhang, Yan-Bin; Li, Jian-Yang; Xu, Jun-Kui; Wang, Song-Lin; Yang, Yong-Wei; Yang, Lei

    2015-04-01

    Spallation neutrons were produced by the irradiation of Pb with 250 MeV protons. The Pb target was surrounded by water, which was used to slow down the emitted neutrons. The moderated neutrons in the water bath were measured using resonance detectors of Au, Mn and In with a cadmium (Cd) cover. From the measured activities of the foils, the neutron flux at different resonance energies was deduced and the epithermal neutron spectra were obtained. Corresponding results calculated with the Monte Carlo code MCNPX were compared with the experimental data to check the validity of the code. The comparison showed that the simulation gives a good prediction of the neutron spectra above 50 eV, while the finite thickness of the foils strongly affected the experimental data at low energies. It was also found that the resonance detectors themselves had a great impact on the simulated energy spectra. Supported by National Natural Science Foundation and Strategic Priority Research Program of the Chinese Academy of Sciences (11305229, 11105186, 91226107, 91026009, XDA03030300)
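
    For reference, the foil-activation step that links a measured activity to the flux at a resonance energy can be written schematically (generic notation, with decay and counting corrections lumped into one factor) as

        \phi(E_r) \;\approx\; \frac{A}{N\,\sigma(E_r)\,\left(1 - e^{-\lambda t_{\mathrm{irr}}}\right) f_c},

    where A is the measured activity, N the number of target atoms in the foil, \sigma(E_r) the resonance cross section, \lambda the decay constant, t_irr the irradiation time, and f_c collects the cooling and counting corrections.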

  1. A search for pulsations in two Algol-type systems V1241 Tau and GQ Dra

    NASA Astrophysics Data System (ADS)

    Ulaş, Burak; Ulusoy, Ceren; Gazeas, Kosmas; Erkan, Naci; Liakos, Alexios

    2014-02-01

    We present new photometric observations of two eclipsing binary systems, V1241 Tau and GQ Dra. We use the following methodology: initially, the Wilson-Devinney code is applied to the light curves in order to determine the photometric elements of the systems. Then, the residuals are analysed using Fourier techniques. The results are the following. One frequency can be possibly attributed to a real light variation of V1241 Tau, while there is no evidence of pulsations in the light curve of GQ Dra.

  2. The Relevance of the De Broglie Velocity (V1 = h/2md1) to Shock Loading Induced Reactions in Lead Azide

    DTIC Science & Technology

    1991-09-01

    Subject terms: de Broglie velocity; detonation; particle velocity; shock-induced reaction; lead azide. Available experimental shock-induced reactive pressure levels for dextrinated and single-crystal lead azide are compared to predicted Pv1 magnitudes (Pv1 = P · CL · V1, where V1 = h/(2 m d1), the de Broglie velocity of the title).

  3. Burma on the Brink: Complications for U.S. Policy in Burma

    DTIC Science & Technology

    1991-12-01


  4. Integrating technology to improve medication administration.

    PubMed

    Prusch, Amanda E; Suess, Tina M; Paoletti, Richard D; Olin, Stephen T; Watts, Starann D

    2011-05-01

    The development, implementation, and evaluation of an i.v. interoperability program to advance medication safety at the bedside are described. I.V. interoperability integrates intelligent infusion devices (IIDs), the bar-code-assisted medication administration system, and the electronic medication administration record system into a bar-code-driven workflow that populates provider-ordered, pharmacist-validated infusion parameters on IIDs. The purpose of this project was to improve medication safety through the integration of these technologies and decrease the potential for error during i.v. medication administration. Four key phases were essential to developing and implementing i.v. interoperability: (a) preparation, (b) i.v. interoperability pilot, (c) preliminary validation, and (d) expansion. The establishment of pharmacy involvement in i.v. interoperability resulted in two additional safety checks: pharmacist infusion rate oversight and nurse independent validation of the autoprogrammed rate. After instituting i.v. interoperability, monthly compliance to the telemetry drug library increased to a mean ± S.D. of 72.1% ± 2.1% from 56.5% ± 1.5%, and the medical-surgical nursing unit's drug library monthly compliance rate increased to 58.6% ± 2.9% from 34.1% ± 2.6% (p < 0.001 for both comparisons). The number of manual pump edits decreased with both telemetry and medical-surgical drug libraries, demonstrating a reduction from 56.9 ± 12.8 to 14.2 ± 3.9 and from 61.2 ± 15.4 to 14.7 ± 3.8, respectively (p < 0.001 for both comparisons). Through the integration and incorporation of pharmacist oversight for rate changes, the telemetry and medical-surgical patient care areas demonstrated a 32% reduction in reported monthly errors involving i.v. administration of heparin. By integrating two stand-alone technologies, i.v. interoperability was implemented to improve medication administration. Medication errors were reduced, nursing workflow was simplified, and pharmacists became involved in checking infusion rates of i.v. medications.

  5. A Green's function method for heavy ion beam transport

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Wilson, J. W.; Schimmerling, W.; Shavers, M. R.; Miller, J.; Benton, E. V.; Frank, A. L.; Badavi, F. F.

    1995-01-01

    The use of Green's function has played a fundamental role in transport calculations for high-charge high-energy (HZE) ions. Two recent developments have greatly advanced the practical aspects of implementation of these methods. The first was the formulation of a closed-form solution as a multiple fragmentation perturbation series. The second was the effective summation of the closed-form solution through nonperturbative techniques. The nonperturbative methods have recently been extended to an inhomogeneous, two-layer transport medium to simulate the lead scattering foil present in the Lawrence Berkeley Laboratories (LBL) biomedical beam line used for cancer therapy. Such inhomogeneous codes are necessary for astronaut shielding in space. The transport codes utilize the Langley Research Center atomic and nuclear database. Transport code and database evaluation are performed by comparison with experiments performed at the LBL Bevalac facility using 670 A MeV 20Ne and 600 A MeV 56Fe ion beams. The comparison with a time-of-flight and delta-E detector measurement for the 20Ne beam and with plastic nuclear track detectors for 56Fe shows agreement up to 35%-40% in water and aluminium targets, respectively.
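
    The transport problem these Green's-function methods address can be summarized in the usual straight-ahead, continuous-slowing-down form (a schematic statement of the HZE transport equation, not quoted from the paper):

        \left[\frac{\partial}{\partial x} - \frac{\partial}{\partial E}\,\tilde S_j(E) + \sigma_j(E)\right]\phi_j(x,E) \;=\; \sum_{k} \int \sigma_{jk}(E,E')\,\phi_k(x,E')\,dE',

    where \phi_j is the fluence of ion type j, \tilde S_j its stopping power, \sigma_j the total macroscopic cross section, and \sigma_{jk} the production cross sections for fragment j from projectile k.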

  6. Theory and computation of general force balance in non-axisymmetric tokamak equilibria

    NASA Astrophysics Data System (ADS)

    Park, Jong-Kyu; Logan, Nikolas; Wang, Zhirui; Kim, Kimin; Boozer, Allen; Liu, Yueqiang; Menard, Jonathan

    2014-10-01

    Non-axisymmetric equilibria in tokamaks can be effectively described by linearized force balance. In addition to the conventional isotropic pressure force, there are three important components that can strongly contribute to the force balance: rotational, anisotropic tensor pressure, and externally given forces, i.e. ∇p + ρ(v · ∇)v + ∇ · Π + f = j × B, especially in, but not limited to, high-β and rotating plasmas. Within the assumption of nested flux surfaces, Maxwell equations and energy minimization lead to the modified-generalized Newcomb equation for radial displacements with simple algebraic relations for perpendicular and parallel displacements, including an inhomogeneous term if any of the forces are not explicitly dependent on displacements. The general perturbed equilibrium code (GPEC) solves this force balance consistent with energy and torque given by external perturbations. Local and global behaviors of solutions will be discussed when ∇ · Π is solved by the semi-analytic code PENT and will be compared with MARS-K. Any first-principle transport code calculating ∇ · Π or f, e.g. POCA, can also be incorporated without demanding iterations. This work was supported by DOE Contract DE-AC02-09CH11466.

  7. Measurement of charge- and mass-changing cross sections for 4He+12C collisions in the energy range 80-220 MeV/u for applications in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Horst, Felix; Schuy, Christoph; Weber, Uli; Brinkmann, Kai-Thomas; Zink, Klemens

    2017-08-01

    4He ions are being considered for use in hadron radiotherapy due to their favorable physical and radiobiological properties. For an accurate dose calculation the fragmentation of the primary 4He ions occurring as a result of nuclear collisions must be taken into account. Therefore precise nuclear reaction models need to be implemented in the radiation transport codes used for dose calculation. A fragmentation experiment using thin graphite targets was conducted at the Heidelberg Ion Beam Therapy Center (HIT) to obtain new and precise 4He-nucleus cross section data in the clinically relevant energy range. Measured values for the charge-changing cross section, the mass-changing cross section, and the inclusive 3He production cross section for 4He+12C collisions at energies between 80 and 220 MeV/u are presented. These data are compared to the 4He-nucleus reaction model by DeVries and Peng as well as to the parametrizations by Tripathi et al. and by Cucinotta et al., which are implemented in the treatment planning code trip98 and several other radiation transport codes.
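
    For a thin target, both measured cross sections follow from simple attenuation counting (a generic relation with symbols defined here, not taken from the paper):

        \sigma \;=\; \frac{A_t}{\rho\, d\, N_A}\,\ln\frac{N_{\mathrm{in}}}{N_{\mathrm{out}}},

    where A_t, \rho and d are the target's molar mass, density and thickness, N_in the number of incident 4He ions, and N_out the number leaving the target with unchanged charge (charge-changing case) or unchanged mass (mass-changing case).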

  8. A comparison between detailed and configuration-averaged collisional-radiative codes applied to nonlocal thermal equilibrium plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poirier, M.; Gaufridy de Dortan, F. de

    A collisional-radiative model describing nonlocal-thermodynamic-equilibrium plasmas is developed. It is based on the HULLAC (Hebrew University Lawrence Livermore Atomic Code) suite for the transition rates, in the zero-temperature radiation field hypothesis. Two variants of the model are presented: the first one is configuration averaged, while the second one is a detailed level version. Comparisons are made between them in the case of a carbon plasma; they show that the configuration-averaged code gives correct results for an electronic temperature Te = 10 eV (or higher) but fails at lower temperatures such as Te = 1 eV. The validity of the configuration-averaged approximation is discussed: the intuitive criterion requiring that the average configuration-energy dispersion must be less than the electron thermal energy turns out to be a necessary but far from sufficient condition. Another condition based on the resolution of a modified rate-equation system is proposed. Its efficiency is emphasized in the case of low-temperature plasmas. Finally, it is shown that near-threshold autoionization cascade processes may induce a severe failure of the configuration-average formalism.

  9. A single U/C nucleotide substitution changing alanine to valine in the beet necrotic yellow vein virus P25 protein promotes increased virus accumulation in roots of mechanically inoculated, partially resistant sugar beet seedlings.

    PubMed

    Koenig, R; Loss, S; Specht, J; Varrelmann, M; Lüddecke, P; Deml, G

    2009-03-01

    Beet necrotic yellow vein virus (BNYVV) A type isolates E12 and S8, originating from areas where resistance-breaking had or had not been observed, respectively, served as starting material for studying the influence of sequence variations in BNYVV RNA 3 on virus accumulation in partially resistant sugar beet varieties. Sub-isolates containing only RNAs 1 and 2 were obtained by serial local lesion passages; biologically active cDNA clones were prepared for RNAs 3 which differed in their coding sequences for P25 aa 67, 68 and 129. Sugar beet seedlings were mechanically inoculated with RNA 1+2/RNA 3 pseudorecombinants. The origin of RNAs 1+2 had little influence on virus accumulation in rootlets. E12 RNA 3 coding for V(67)C(68)Y(129) P25, however, enabled a much higher virus accumulation than S8 RNA 3 coding for A(67)H(68)H(129) P25. Mutants revealed that this was due only to the V(67) 'GUU' codon as opposed to the A(67) 'GCU' codon.

  10. Edge gyrokinetic theory and continuum simulations

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Xiong, Z.; Dorr, M. R.; Hittinger, J. A.; Bodi, K.; Candy, J.; Cohen, B. I.; Cohen, R. H.; Colella, P.; Kerbel, G. D.; Krasheninnikov, S.; Nevins, W. M.; Qin, H.; Rognlien, T. D.; Snyder, P. B.; Umansky, M. V.

    2007-08-01

    The following results are presented from the development and application of TEMPEST, a fully nonlinear (full-f) five-dimensional (3d2v) gyrokinetic continuum edge-plasma code. (1) As a test of the interaction of collisions and parallel streaming, TEMPEST is compared with published analytic and numerical results for endloss of particles confined by combined electrostatic and magnetic wells. Good agreement is found over a wide range of collisionality, confining potential and mirror ratio, and the required velocity space resolution is modest. (2) In a large-aspect-ratio circular geometry, excellent agreement is found for a neoclassical equilibrium with parallel ion flow in the banana regime with zero temperature gradient and radial electric field. (3) The four-dimensional (2d2v) version of the code produces the first self-consistent simulation results of collisionless damping of geodesic acoustic modes and zonal flow (Rosenbluth-Hinton residual) with Boltzmann electrons using a full-f code. The electric field is also found to agree with the standard neoclassical expression for steep density and ion temperature gradients in the plateau regime. In divertor geometry, it is found that the endloss of particles and energy induces parallel flow stronger than the core neoclassical predictions in the SOL.
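
    For context, the collisionless residual against which such zonal-flow tests are normally benchmarked is the Rosenbluth-Hinton level (a standard large-aspect-ratio result from the literature, not a number quoted from this abstract):

        \frac{\phi_k(t \to \infty)}{\phi_k(0)} \;=\; \frac{1}{1 + 1.6\, q^2/\sqrt{\epsilon}},

    with q the safety factor and \epsilon the inverse aspect ratio.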

  11. A Multilevel Shape Fit Analysis of Neutron Transmission Data

    NASA Astrophysics Data System (ADS)

    Naguib, K.; Sallam, O. H.; Adib, M.; Ashry, A.

    A multilevel shape fit analysis of neutron transmission data is presented. The multilevel computer code SHAPE is used to analyse clean transmission data obtained from time-of-flight (TOF) measurements. The shape analysis deduces the parameters of the observed resonances in the energy region considered in the measurements. The SHAPE code is based on a least-squares fit of a multilevel Breit-Wigner formula and includes both instrumental resolution and Doppler broadening. Applying the SHAPE code to a test example of measured transmission data for 151Eu, 153Eu and natural Eu in the energy range 0.025-1 eV gave good results for the analysis technique used.
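
    To make the fitting step concrete, the sketch below fits a single-level Breit-Wigner resonance to synthetic transmission data with SciPy; Doppler and resolution broadening are omitted, and all names and numbers are illustrative rather than taken from the SHAPE code:

        import numpy as np
        from scipy.optimize import curve_fit

        def transmission(E, sigma0, E0, gamma, n=1.0e-3):
            """Transmission of a sample with areal density n (atoms/barn) for a
            single-level Breit-Wigner resonance of peak cross section sigma0 (barn)."""
            sigma = sigma0 * (gamma / 2) ** 2 / ((E - E0) ** 2 + (gamma / 2) ** 2)
            return np.exp(-n * sigma)

        E = np.linspace(0.2, 0.8, 200)                              # eV
        truth = transmission(E, sigma0=3000.0, E0=0.46, gamma=0.09)
        rng = np.random.default_rng(1)
        data = truth + rng.normal(scale=0.01, size=E.size)          # synthetic noisy data

        popt, pcov = curve_fit(transmission, E, data, p0=[2000.0, 0.5, 0.1])
        print("sigma0, E0, Gamma =", popt)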

  12. High-Current-Density Thermionic Cathodes and the Generation of High-Voltage Electron Beams

    DTIC Science & Technology

    1989-04-30

    Cathode temperature = 1700 °C; peak gun voltage = 90 kV (Figure 37). … by modeling the filament as a thin disk. The shape of the actual filament is sketched in Fig. 2. The EGUN code [13] is used to calculate …

  13. Astrochemistry in the Early Universe: Collisional Rates for H on H2

    NASA Technical Reports Server (NTRS)

    Lepp, S. H.; Archer, D.; Balakrishnan, N.

    2006-01-01

    We present preliminary results of a full quantum calculation of state to state cross sections for H on H2. These cross sections are calculated for v=0,4 j=0,15 for energies up to 3.0 eV. The cross sections are calculated on the BKMP2 potential surface (Boothroyd et al. 1996) with the ABC scattering code (Skouteris et al. 2000).

  14. Joint Services Electronics Program.

    DTIC Science & Technology

    1986-01-01

    … Similar structures were also studied by direct reflectance measurements at 2 K, where the excitonic transitions are so strong that modulation is … separate investigation. Single quantum wells of varying sizes were grown and studied [2]. The binding energies of acceptors were also determined.

  15. Guide to Using Sierra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Ryan Phillip; Agelastos, Anthony Michael; Miller, Joel D.

    2015-03-01

    Sierra is an engineering mechanics simulation code suite supporting the Nation's Nuclear Weapons mission as well as other customers. It has explicit ties to Sandia National Labs' workflow, including geometry and meshing, design and optimization, and visualization. Distinguishing strengths include "application aware" development, scalability, SQA and V&V, multiple scales, and multi-physics coupling. This document is intended to help new and existing users of Sierra as a user manual and troubleshooting guide.

  16. 2018 Ground Robotics Capabilities Conference and Exhibiton

    DTIC Science & Technology

    2018-04-11

    … Transportable Robot System (MTRS) Inc 1; Explosive Ordnance Disposal Common Robotic System-Heavy (CRS-H) Inc 1 … AI risk mitigation methodologies and techniques are at best immature (e.g., V&V, probabilistic software analytics, code-level …) … controller to minimize potential UxS mishaps and unauthorized Command and Control (C2). PSP-10: Ensure that software systems which exhibit non…

  17. Post Flood Report March/April 1987.

    DTIC Science & Technology

    1987-04-01

    … Appreciation is extended to Lawrence Bergen, Chief, Water Control Branch, who gave valuable assistance to RCC during the flood. Cover photo of spillway discharge at Knightville Dam taken by Leo Milette on 6 April 1987 at 1320 hours with 1.9 feet of water over the crest.

  18. Guide to Using Sierra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Ryan Phillip; Agelastos, Anthony Michael; Miller, Joel D.

    2017-04-01

    Sierra is an engineering mechanics simulation code suite supporting the Nation's Nuclear Weapons mission as well as other customers. It has explicit ties to Sandia National Labs' workflow, including geometry and meshing, design and optimization, and visualization. Distinguishing strengths include "application aware" development, scalability, SQA and V&V, multiple scales, and multi-physics coupling. This document is intended to help new and existing users of Sierra as a user manual and troubleshooting guide.

  19. 75 FR 1412 - Notice of Lodging of Consent Decree Under the Comprehensive Environmental Response, Compensation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-11

    ... Decree in United States v. Highview Gardens, Inc., Civil Action No. 2:09-cv-02827-PD was lodged with the... States v. Highview Gardens, Inc., Civil Action No. 2:09-cv- 02827-PD, D.J. Ref. 90-11-2-902/4. The..., Environment and Natural Resources Division. [FR Doc. 2010-175 Filed 1-8-10; 8:45 am] BILLING CODE 4410-15-P ...

  20. Determination of differential cross sections for electron-impact excitation of electronic states of molecular oxygen

    NASA Astrophysics Data System (ADS)

    Campbell, L.; Green, M. A.; Brunger, M. J.; Teubner, P. J.; Cartwright, D. C.

    2000-02-01

    The development and initial results of a method for the determination of differential cross sections for electron scattering by molecular oxygen are described. The method has been incorporated into an existing package of computer programs which, given spectroscopic factors, dissociation energies and an energy-loss spectrum for electron-impact excitation, determine the differential cross sections for each electronic state relative to that of the elastic peak. Enhancements of the original code were made to deal with particular aspects of electron scattering from O2, such as the overlap of vibrational levels of the ground state with transitions to excited states, and transitions to levels close to and above the dissociation energy in the Herzberg and Schumann-Runge continua. The utility of the code is specifically demonstrated for the "6-eV states" of O2, where we report absolute differential cross sections for their excitation by 15-eV electrons. In addition, an integral cross section, derived from the differential cross section measurements, is also reported for this excitation process and compared against available theoretical results. The present differential and integral cross sections for excitation of the "6-eV states" of O2 are the first to be reported in the literature for electron-impact energies below 20 eV.

  1. Measurement of the 234U(n, f ) cross-section with quasi-monoenergetic beams in the keV and MeV range using a Micromegas detector assembly

    NASA Astrophysics Data System (ADS)

    Stamatopoulos, A.; Kanellakopoulos, A.; Kalamara, A.; Diakaki, M.; Tsinganis, A.; Kokkoris, M.; Michalopoulou, V.; Axiotis, M.; Lagoyiannis, A.; Vlastou, R.

    2018-01-01

    The 234U neutron-induced fission cross-section has been measured at incident neutron energies of 452, 550, 651 keV and 7.5, 8.7, 10 MeV using the 7Li(p,n) and the 2H(d,n) reactions, respectively, relative to the 235U(n,f) and 238U(n,f) reference reactions. The measurement was performed at the neutron beam facility of the National Center for Scientific Research "Demokritos", using a set-up based on Micromegas detectors. The active mass of the actinide samples and the corresponding impurities were determined via α-spectroscopy using a surface barrier silicon detector. The neutron spectra intercepted by the actinide samples have been thoroughly studied by coupling the NeuSDesc and MCNP5 codes, taking into account the energy and angular straggling of the primary ion beams in the neutron source targets in addition to contributions from competing reactions (e.g. deuteron break-up) and neutron scattering in the surrounding materials. Auxiliary Monte Carlo simulations were performed making combined use of the FLUKA and GEF codes, focusing particularly on the determination of the fission fragment detection efficiency. The developed methodology and the final results are presented.

  2. 76 FR 48949 - Office of Thrift Supervision Integration Pursuant to the Dodd-Frank Wall Street Reform and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-09

    ...Pursuant to Title III of the Dodd-Frank Wall Street Reform and Consumer Protection Act, all functions of the Office of Thrift Supervision (OTS) relating to Federal savings associations and the rulemaking authority of the OTS relating to all savings associations are transferred to the Office of the Comptroller of the Currency (OCC) on July 21, 2011 (transfer date). In order to facilitate the OCC's enforcement and administration of former OTS rules and to make appropriate changes to these rules to reflect OCC supervision of Federal savings associations as of the transfer date, the OCC is republishing, with nomenclature and other technical changes, the OTS regulations currently found in Chapter V of Title 12 of the Code of Federal Regulations. The republished regulations will be recodified with the OCC's regulations in Chapter I at parts 100 through 197 (Republished Regulations), effective on July 21, 2011. The Republished Regulations will supersede the OTS regulations in Chapter V for purposes of OCC supervision and regulation of Federal savings associations, and certain of the Republished Rules will supersede the OTS regulations in Chapter V for purposes of the FDIC's supervision of state savings associations. Chapter V of Title 12 of the Code of Federal Regulations will be vacated at a later date.

  3. Predictive Feedback Can Account for Biphasic Responses in the Lateral Geniculate Nucleus

    PubMed Central

    Jehee, Janneke F. M.; Ballard, Dana H.

    2009-01-01

    Biphasic neural response properties, where the optimal stimulus for driving a neural response changes from one stimulus pattern to the opposite stimulus pattern over short periods of time, have been described in several visual areas, including lateral geniculate nucleus (LGN), primary visual cortex (V1), and middle temporal area (MT). We describe a hierarchical model of predictive coding and simulations that capture these temporal variations in neuronal response properties. We focus on the LGN-V1 circuit and find that after training on natural images the model exhibits the brain's LGN-V1 connectivity structure, in which the structure of V1 receptive fields is linked to the spatial alignment and properties of center-surround cells in the LGN. In addition, the spatio-temporal response profile of LGN model neurons is biphasic in structure, resembling the biphasic response structure of neurons in cat LGN. Moreover, the model displays a specific pattern of influence of feedback, where LGN receptive fields that are aligned over a simple cell receptive field zone of the same polarity decrease their responses while neurons of opposite polarity increase their responses with feedback. This phase-reversed pattern of influence was recently observed in neurophysiology. These results corroborate the idea that predictive feedback is a general coding strategy in the brain. PMID:19412529
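
    The record above describes a hierarchical predictive-coding model in words rather than equations. The following minimal sketch (not the authors' implementation; the single-layer setup, basis matrix W and learning rate are illustrative assumptions) shows the basic inference step such models share: a layer predicts its input, and the prediction error drives the update of the latent estimate.

        import numpy as np

        def predictive_coding_step(x, W, r, lr=0.1):
            """One illustrative inference step of a single predictive-coding layer.

            x : input vector (e.g., an image patch or LGN activity)
            W : basis matrix whose columns are learned features (assumed given)
            r : current estimate of the latent causes
            """
            error = x - W @ r            # prediction error, carried "forward"
            r = r + lr * (W.T @ error)   # error corrects the estimate via feedback
            return r, error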

  4. THE McGill PLANAR HYDROGEN ATMOSPHERE CODE (McPHAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.

    2012-04-10

    The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation, at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

  5. The McGill Planar Hydrogen Atmosphere Code (McPHAC)

    NASA Astrophysics Data System (ADS)

    Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.; Rutledge, Robert E.

    2012-04-01

    The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation, at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.
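
    For orientation, the plane-parallel hydrostatic structure that codes of this kind integrate reduces to the standard relation below (a textbook statement, not McPHAC's internal discretization; relativistic corrections to the surface gravity are omitted here), with g treated as constant through the thin atmosphere:

        \[ \frac{dP}{dz} = -\rho\, g, \qquad g \simeq \frac{G M}{R^{2}} \]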

  6. McPHAC: McGill Planar Hydrogen Atmosphere Code

    NASA Astrophysics Data System (ADS)

    Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.; Rutledge, Robert E.

    2012-10-01

    The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

  7. Anodizing color coded anodized Ti6Al4V medical devices for increasing bone cell functions

    PubMed Central

    Ross, Alexandra P; Webster, Thomas J

    2013-01-01

    Current titanium-based implants are often anodized in sulfuric acid (H2SO4) for color coding purposes. However, a crucial parameter in selecting the material for an orthopedic implant is the degree to which it will integrate into the surrounding bone. Loosening at the bone–implant interface can cause catastrophic failure when motion occurs between the implant and the surrounding bone. Recently, a different anodization process using hydrofluoric acid has been shown to increase bone growth on commercially pure titanium and titanium alloys through the creation of nanotubes. The objective of this study was to compare, for the first time, the influence of anodizing a titanium alloy medical device in sulfuric acid for color coding purposes, as is done in the orthopedic implant industry, followed by anodizing the device in hydrofluoric acid to implement nanotubes. Specifically, Ti6Al4V model implant samples were anodized first with sulfuric acid to create color-coding features, and then with hydrofluoric acid to implement surface features to enhance osteoblast functions. The material surfaces were characterized by visual inspection, scanning electron microscopy, contact angle measurements, and energy dispersive spectroscopy. Human osteoblasts were seeded onto the samples for a series of time points and were measured for adhesion and proliferation. After 1 and 2 weeks, the levels of alkaline phosphatase activity and calcium deposition were measured to assess the long-term differentiation of osteoblasts into the calcium depositing cells. The results showed that anodizing in hydrofluoric acid after anodizing in sulfuric acid partially retains color coding and creates unique surface features to increase osteoblast adhesion, proliferation, alkaline phosphatase activity, and calcium deposition. In this manner, this study provides a viable method to anodize an already color coded, anodized titanium alloy to potentially increase bone growth for numerous implant applications. PMID:23319862

  8. Anodizing color coded anodized Ti6Al4V medical devices for increasing bone cell functions.

    PubMed

    Ross, Alexandra P; Webster, Thomas J

    2013-01-01

    Current titanium-based implants are often anodized in sulfuric acid (H(2)SO(4)) for color coding purposes. However, a crucial parameter in selecting the material for an orthopedic implant is the degree to which it will integrate into the surrounding bone. Loosening at the bone-implant interface can cause catastrophic failure when motion occurs between the implant and the surrounding bone. Recently, a different anodization process using hydrofluoric acid has been shown to increase bone growth on commercially pure titanium and titanium alloys through the creation of nanotubes. The objective of this study was to compare, for the first time, the influence of anodizing a titanium alloy medical device in sulfuric acid for color coding purposes, as is done in the orthopedic implant industry, followed by anodizing the device in hydrofluoric acid to implement nanotubes. Specifically, Ti6Al4V model implant samples were anodized first with sulfuric acid to create color-coding features, and then with hydrofluoric acid to implement surface features to enhance osteoblast functions. The material surfaces were characterized by visual inspection, scanning electron microscopy, contact angle measurements, and energy dispersive spectroscopy. Human osteoblasts were seeded onto the samples for a series of time points and were measured for adhesion and proliferation. After 1 and 2 weeks, the levels of alkaline phosphatase activity and calcium deposition were measured to assess the long-term differentiation of osteoblasts into the calcium depositing cells. The results showed that anodizing in hydrofluoric acid after anodizing in sulfuric acid partially retains color coding and creates unique surface features to increase osteoblast adhesion, proliferation, alkaline phosphatase activity, and calcium deposition. In this manner, this study provides a viable method to anodize an already color coded, anodized titanium alloy to potentially increase bone growth for numerous implant applications.

  9. TH-A-19A-11: Validation of GPU-Based Monte Carlo Code (gPMC) Versus Fully Implemented Monte Carlo Code (TOPAS) for Proton Radiation Therapy: Clinical Cases Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D; Schuemann, J; Dowdell, S

    Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to that of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) versus fully implemented proton therapy based MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases due to anatomical geometrical complexity (air cavities and density heterogeneities), making dose calculation very challenging, and prostate cases due to higher proton energies used and close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS methods were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed passing rate of gamma index for the target of more than 99%, the fifth having a gamma index passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and fully implemented proton therapy based MC code for a group of dosimetrically challenging patient cases.
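
    The gamma-index comparison mentioned above combines a dose-difference and a distance-to-agreement criterion. The sketch below is a simplified one-dimensional, global-normalization version (2%/2 mm by default), meant only to illustrate the metric; it is not the analysis code used in the record, and the profile inputs are hypothetical.

        import numpy as np

        def gamma_index_1d(x, dose_eval, dose_ref, dd=0.02, dta=2.0):
            """Simplified 1D global gamma index.

            x         : sample positions in mm (same grid for both profiles)
            dose_eval : evaluated dose profile (e.g., from a fast code)
            dose_ref  : reference dose profile (e.g., from full Monte Carlo)
            dd        : dose-difference criterion as a fraction of the max reference dose
            dta       : distance-to-agreement criterion in mm
            """
            dd_abs = dd * dose_ref.max()
            gamma = np.empty_like(dose_eval, dtype=float)
            for i, (xi, de) in enumerate(zip(x, dose_eval)):
                dist = (x - xi) / dta
                diff = (dose_ref - de) / dd_abs
                gamma[i] = np.sqrt(dist ** 2 + diff ** 2).min()
            return gamma

    The passing rates quoted in such studies correspond to the fraction of evaluated points with gamma <= 1.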

  10. The Hippo pathway in hepatocellular carcinoma: Non-coding RNAs in action.

    PubMed

    Shi, Xuan; Zhu, Hai-Rong; Liu, Tao-Tao; Shen, Xi-Zhong; Zhu, Ji-Min

    2017-08-01

    Hepatocellular carcinoma (HCC) is the sixth most common cancer and the third leading cause of cancer-related death worldwide. However, current strategies for curing HCC are far from satisfactory. The Hippo pathway is an evolutionarily conserved tumor suppressive pathway that plays crucial roles in organ size control and tissue homeostasis. Its dysregulation is commonly observed in various types of cancer including HCC. Recently, the prominent role of non-coding RNAs in the Hippo pathway during normal development and neoplastic progression is also emerging in the liver. Thus, further investigation into the regulatory network between non-coding RNAs and the Hippo pathway and their connections with HCC may provide new therapeutic avenues towards developing an effective preventative or perhaps curative treatment for HCC. Herein we summarize the role of non-coding RNAs in the Hippo pathway, with an emphasis on their contribution to carcinogenesis, diagnosis, treatment and prognosis of HCC.

  11. Sandia Simple Particle Tracking (Sandia SPT) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen M.

    2015-06-15

    Sandia SPT is software designed to accompany a methods book chapter that provides an introduction on how to label and track individual proteins. The Sandia Simple Particle Tracking code uses techniques common to the image processing community; its value is that it facilitates implementing the methods described in the book chapter by providing the necessary open-source code. The code performs single particle spot detection (or segmentation and localization) followed by tracking (or connecting the detected particles into trajectories). The book chapter, which along with the headers in each file constitutes the documentation for the code, is: Anthony, S.M.; Carroll-Portillo, A.; Timlon, J.A., Dynamics and Interactions of Individual Proteins in the Membrane of Living Cells. In Anup K. Singh (Ed.) Single Cell Protein Analysis, Methods in Molecular Biology. Springer.
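
    As a rough illustration of the detect-then-link pipeline described above (not the Sandia SPT algorithms themselves; the threshold, labeling step and greedy nearest-neighbour linking below are generic placeholders), a minimal version might look like:

        import numpy as np
        from scipy import ndimage

        def detect_spots(frame, threshold):
            """Segment bright spots in one image and return their centroid coordinates."""
            labels, n = ndimage.label(frame > threshold)
            return np.array(ndimage.center_of_mass(frame, labels, list(range(1, n + 1))))

        def link_nearest(prev_pts, new_pts, max_disp):
            """Greedy nearest-neighbour linking of detections between two frames."""
            links = []
            for i, p in enumerate(prev_pts):
                d = np.linalg.norm(new_pts - p, axis=1)
                j = int(d.argmin())
                if d[j] <= max_disp:      # reject implausibly large jumps
                    links.append((i, j))
            return links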

  12. Multi-scale modeling of irradiation effects in spallation neutron source materials

    NASA Astrophysics Data System (ADS)

    Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.

    2011-07-01

    Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) code for nuclear reactions, and modeled the interactions between high-energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte-Carlo (kMC) methods in each subcascade. The third part considered the evolution of the damage structure, estimated by reaction kinetic analysis. The fourth part involved the estimation of mechanical property change using three-dimensional discrete dislocation dynamics (DDD). Using the above four-part code, stress-strain curves for high-energy proton-irradiated Ni were obtained.

  13. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantzis, G; Leventouri, T; Tachibana, H

    Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad-core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm3. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms.
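
    The abstract describes the analytical form of the pencil beam: a broad-beam central-axis depth dose with an inverse-square correction, multiplied by a Gaussian off-axis term. A serial sketch of that product (illustrative only; the depth-dose function, the lateral spread function and the SSD value are placeholders, and the GPU parallelization is not shown) is:

        import numpy as np

        def pencil_beam_dose(x, y, z, depth_dose, sigma, ssd=1000.0):
            """Illustrative analytical pencil-beam dose at points (x, y, z) in mm.

            depth_dose : callable giving the broad-beam central-axis depth dose in water
            sigma      : callable giving the lateral Gaussian spread at depth z (mm)
            ssd        : nominal source-to-surface distance (mm) for the
                         inverse-square correction
            """
            central = depth_dose(z) * (ssd / (ssd + z)) ** 2        # central-axis term
            r2 = x ** 2 + y ** 2
            lateral = np.exp(-r2 / (2.0 * sigma(z) ** 2)) / (2.0 * np.pi * sigma(z) ** 2)
            return central * lateral                                 # Gaussian off-axis term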

  14. Current status of kilovoltage (kV) radiotherapy in the UK: installed equipment, clinical workload, physics quality control and radiation dosimetry.

    PubMed

    Palmer, Antony L; Pearson, Michael; Whittard, Paul; McHugh, Katie E; Eaton, David J

    2016-12-01

    To assess the status and practice of kilovoltage (kV) radiotherapy in the UK. 96% of the radiotherapy centres in the UK responded to a comprehensive survey. An analysis of the installed equipment base, patient numbers, clinical treatment sites, quality control (QC) testing and radiation dosimetry processes was undertaken. 73% of UK centres have at least one kV treatment unit, with 58 units installed across the UK. Although 35% of units are over 10 years old, 39% of units have been installed in the last 5 years. Approximately 6000 patients are treated with kV units in the UK each year, the most common site (44%) being basal cell carcinoma. A benchmark of QC practice in the UK is presented, against which individual centres can compare their procedures, frequency of testing and acceptable tolerance values. We propose the use of internal "notification" and "suspension" levels for analysis. All surveyed centres were using recommended Codes of Practice for kV dosimetry in the UK; approximately the same number using in-air and in-water methodologies for medium energy, with two-thirds of all centres citing "clinical relevance" as the reason for choice of code. 64% of centres had hosted an external dosimetry audit within the last 3 years, with only one centre never being independently audited. The majority of centres use locally measured applicator factors and published backscatter factors for treatments. Monitor unit calculations are performed using software in only 36% of centres. A comprehensive review of current kV practice in the UK is presented. Advances in knowledge: Data and discussion on contemporary kV radiotherapy in the UK, with a particular focus on physics aspects.

  15. Current status of kilovoltage (kV) radiotherapy in the UK: installed equipment, clinical workload, physics quality control and radiation dosimetry

    PubMed Central

    Pearson, Michael; Whittard, Paul; McHugh, Katie E; Eaton, David J

    2016-01-01

    Objective: To assess the status and practice of kilovoltage (kV) radiotherapy in the UK. Methods: 96% of the radiotherapy centres in the UK responded to a comprehensive survey. An analysis of the installed equipment base, patient numbers, clinical treatment sites, quality control (QC) testing and radiation dosimetry processes was undertaken. Results: 73% of UK centres have at least one kV treatment unit, with 58 units installed across the UK. Although 35% of units are over 10 years old, 39% of units have been installed in the last 5 years. Approximately 6000 patients are treated with kV units in the UK each year, the most common site (44%) being basal cell carcinoma. A benchmark of QC practice in the UK is presented, against which individual centres can compare their procedures, frequency of testing and acceptable tolerance values. We propose the use of internal “notification” and “suspension” levels for analysis. All surveyed centres were using recommended Codes of Practice for kV dosimetry in the UK; approximately the same number using in-air and in-water methodologies for medium energy, with two-thirds of all centres citing “clinical relevance” as the reason for choice of code. 64% of centres had hosted an external dosimetry audit within the last 3 years, with only one centre never being independently audited. The majority of centres use locally measured applicator factors and published backscatter factors for treatments. Monitor unit calculations are performed using software in only 36% of centres. Conclusion: A comprehensive review of current kV practice in the UK is presented. Advances in knowledge: Data and discussion on contemporary kV radiotherapy in the UK, with a particular focus on physics aspects. PMID:27730839

  16. NSTX-U Control System Upgrades

    DOE PAGES

    Erickson, K. G.; Gates, D. A.; Gerhardt, S. P.; ...

    2014-06-01

    The National Spherical Tokamak Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially including an elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Redhat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.

  17. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    PubMed

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance.

  18. Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

    PubMed Central

    Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz

    2014-01-01

    The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
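
    For readers unfamiliar with SFA, the sketch below shows the linear core of the algorithm: whiten the signal, then pick the directions whose temporal derivative has the smallest variance. It is a generic illustration only (the study above uses a nonlinear expansion and simulated retinal-wave input, neither of which is reproduced here).

        import numpy as np

        def linear_sfa(X, n_components=2):
            """Minimal linear Slow Feature Analysis.

            X : array of shape (T, d), a time series of input vectors.
            Returns a matrix whose rows map raw inputs to the slowest features.
            """
            X = X - X.mean(axis=0)                     # center the data
            cov = np.cov(X, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)
            evals = np.clip(evals, 1e-12, None)        # guard against degenerate directions
            S = evecs / np.sqrt(evals)                 # whitening (sphering) matrix
            Z = X @ S
            dZ = np.diff(Z, axis=0)                    # temporal derivative of whitened signal
            devals, devecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
            W = devecs[:, :n_components]               # smallest derivative variance = slowest
            return (S @ W).T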

  19. Monte Carlo calculation of dose rate conversion factors for external exposure to photon emitters in soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clovas, A.; Zanthos, S.; Antonopoulos-Domis, M.

    2000-03-01

    The dose rate conversion factors Ḋ_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h^-1 per Bq kg^-1) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: (1) The MCNP code of Los Alamos; (2) The GEANT code of CERN; and (3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by the comparison of the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly the MCNP, calculate accurately the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the Ḋ_CF values calculated from the three codes are in very good agreement with each other. The comparison between these results and the results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. In contrast, the agreement is not as good (difference of 20-30%) for the low energy photons.

  20. Evaluation of playground injuries based on ICD, E codes, international classification of external cause of injury codes (ICECI), and abbreviated injury scale coding systems.

    PubMed

    Tan, N C; Ang, A; Heng, D; Chen, J; Wong, H B

    2007-01-01

    The survey aimed to describe the epidemiology of playground-related injuries in Singapore based on the ICD-9, AIS/ISS and PTS scoring systems, and the mechanisms and causes of such injuries according to E codes and ICECI codes. A cross-sectional questionnaire survey examined children (< 16 years old), who sought treatment for or died of unintentional injuries in the ED of three hospitals, two primary care centers and the sole Forensic Medicine Department of Singapore. A data dictionary was compiled using guidelines from CDC/WHO. The ISS, AIS and PTS, ICD-9, ICECI v1 and E codes were used to describe the details of the injuries. 19,094 childhood injuries were recorded in the database, of which 1617 were playground injuries (8.5%). The injured children (mean age=6.8 years, SD 2.9 years) were predominantly male (M:F ratio = 1.71:1). Falls were the most frequent injuries (70.7%) using ICECI. 25.0% of injuries involved radial and ulnar fractures (ICD-9 code). 99.4% of these injuries were minor, with PTS scores of 9-12. Children aged 6-10 years were prone to upper limb injuries (71.1%) based on AIS. The use of international coding systems in injury surveillance facilitated standardisation of description and comparison of playground injuries.

  1. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.
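
    As a reminder of what the SPH discretization looks like (a generic textbook form, not the miluphCUDA implementation; the brute-force neighbour loop below stands in for the tree or grid search a real code would use), the density of particle i is the kernel-weighted sum over its neighbours:

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard 3D cubic spline smoothing kernel W(r, h)."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)                  # 3D normalization
            return sigma * np.where(
                q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

        def sph_density(positions, masses, h):
            """Brute-force SPH density summation (O(N^2); illustrative only)."""
            diff = positions[:, None, :] - positions[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)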

  2. 40 CFR 158.32 - Format of data submissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... identification of the substance(s) tested and the test name or data requirement addressed. (ii) The author(s) of..., the name and address of the laboratory, project numbers or other identifying codes. (v) If the study...

  3. Kinetic neoclassical calculations of impurity radiation profiles

    DOE PAGES

    Stotler, D. P.; Battaglia, D. J.; Hager, R.; ...

    2016-12-30

    Modifications of the drift-kinetic transport code XGC0 to include the transport, ionization, and recombination of individual charge states, as well as the associated radiation, are described. The code is first applied to a simulation of an NSTX H-mode discharge with carbon impurity to demonstrate the approach to coronal equilibrium. The effects of neoclassical phenomena on the radiated power profile are examined sequentially through the activation of individual physics modules in the code. Orbit squeezing and the neoclassical inward pinch result in increased radiation for temperatures above a few hundred eV and changes to the ratios of charge state emissions at a given electron temperature. As a result, analogous simulations with a neon impurity yield qualitatively similar results.
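
    The coronal equilibrium the simulation approaches is, in its textbook form (quoted here only for orientation, not as XGC0's internal formulation), a density-independent balance between electron-impact ionization and recombination for each charge state Z:

        \[ n_Z\, S_Z(T_e) = n_{Z+1}\, \alpha_{Z+1}(T_e)
           \quad\Longrightarrow\quad
           \frac{n_{Z+1}}{n_Z} = \frac{S_Z(T_e)}{\alpha_{Z+1}(T_e)} \]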

  4. Calculation of response matrix of CaSO4:Dy based neutron dosimeter using Monte Carlo code FLUKA and measurement of 241Am-Be spectra

    NASA Astrophysics Data System (ADS)

    Chatterjee, S.; Bakshi, A. K.; Tripathy, S. P.

    2010-09-01

    A response matrix for a CaSO4:Dy based neutron dosimeter was generated using the Monte Carlo code FLUKA in the energy range from thermal to 20 MeV for a set of eight Bonner spheres of diameter 3-12″, including the bare one. The response of the neutron dosimeter was measured for the above set of spheres for a 241Am-Be neutron source covered with 2 mm lead. An analytical expression for the response function was devised as a function of sphere mass. Using the Frascati Unfolding Iteration Tool (FRUIT) unfolding code, the neutron spectrum of 241Am-Be was unfolded and compared with the standard IAEA spectrum for the same source.
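
    The unfolding step mentioned above solves a discretized Fredholm equation, counts_i = sum_j R_ij * phi_j, for a non-negative spectrum phi. FRUIT does this with its own iterative parametric scheme; the snippet below is only a minimal non-negative least-squares stand-in to show the shape of the problem (the response matrix R and the sphere readings are assumed supplied, e.g. from FLUKA and the Bonner-sphere measurements).

        import numpy as np
        from scipy.optimize import nnls

        def unfold_spectrum(R, counts):
            """R[i, j]: response of sphere i to energy bin j; counts[i]: its reading."""
            phi, residual_norm = nnls(R, counts)
            return phi  # non-negative fluence per energy bin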

  5. Hard gamma-ray background from the coding collimator of a gamma-ray telescope under the conditions of a space experiment

    NASA Astrophysics Data System (ADS)

    Aleksandrov, A. P.; Berezovoj, A. N.; Gal'Per, A. M.; Grachev, V. M.; Dmitrenko, V. V.; Kirillov-Ugryumov, V. G.; Lebedev, V. V.; Lyakhov, V. A.; Moiseev, A. A.; Ulin, S. E.; Shchvets, N. I.

    1984-11-01

    Coding collimators are used to improve the angular resolution of gamma-ray telescopes at energies above 50 MeV. However, the interaction of cosmic rays with the collimator material can lead to the appearance of a gamma-ray background flux which can have a deleterious effect on measurement efficiency. An experiment was performed on the Salyut-6-Soyuz spacecraft system with the Elena-F small-scale gamma-ray telescope in order to measure the magnitude of this background. It is shown that, even at a zenith angle of approximately zero degrees (the angle at which the gamma-ray observations are made), the coding collimator has only an insignificant effect on the background conditions.

  6. SPECabq v. 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambers, Robert S.; Neidigk, Matthew A.

    Sandia SPECabq is a FORTRAN code that defines the user-supplied subroutines needed to perform nonlinear viscoelastic analyses in the ABAQUS commercial finite element code based on the Simplified Potential Energy Clock (SPEC) Model. The SPEC model was published in the open literature in 2009. It must be compiled and linked with the ABAQUS libraries under the user-supplied subroutine option of the ABAQUS executable script. The subroutine is used to analyze the thermomechanical behavior of isotropic polymers, predicting, for example, how a polymer may undergo stress or volume relaxation under different temperature and loading environments. This subroutine enables the ABAQUS finite element code to be used for analyzing the thermo-mechanical behavior of samples and parts that are made from glassy polymers.

  7. Calculation of Dose for Skyshine Radiation From a 45 MeV Electron LINAC

    NASA Astrophysics Data System (ADS)

    Hori, M.; Hikoji, M.; Takahashi, H.; Takahashi, K.; Kitaichi, M.; Sawamura, S.; Nojiri, I.

    1996-11-01

    Dose estimation for skyshine plays an important role in the evaluation of the environment around nuclear facilities. We performed calculations for the skyshine radiation from a Hokkaido University 45 MeV linear accelerator using a general purpose user's version of the EGS4 Monte Carlo Code. To verify the accuracy of the code, the simulation results were compared with our experimental results, in which a gated counting method was used to measure low-level pulsed leakage radiation. In the experiment, measurements were carried out up to 600 m away from the LINAC. The simulation results are consistent with the experimental values at distances between 100 and 400 m from the LINAC. However, the agreement within 100 m of the LINAC is not as good because of the simplified geometrical modeling in the simulation. This suggests that this version is useful for skyshine calculations.

  8. Characteristic evaluation of a Lithium-6 loaded neutron coincidence spectrometer.

    PubMed

    Hayashi, M; Kaku, D; Watanabe, Y; Sagara, K

    2007-01-01

    Characteristics of a (6)Li-loaded neutron coincidence spectrometer were investigated from both measurements and Monte Carlo simulations. The spectrometer consists of three (6)Li-glass scintillators embedded in a liquid organic scintillator BC-501A, which can selectively detect neutrons that deposit their total energy in the BC-501A using a coincidence signal generated from the capture event of thermalised neutrons in the (6)Li-glass scintillators. The relative efficiency and the energy response were measured using 4.7, 7.2 and 9.0 MeV monoenergetic neutrons. The measured values were compared with Monte Carlo calculations performed by combining the neutron transport code PHITS and the scintillator response calculation code SCINFUL. The experimental light-output spectra were in good agreement in shape with the calculated ones. The energy dependence of the detection efficiency was reproduced by the calculation. The response matrices for 1-10 MeV neutrons were finally obtained.

  9. A multi-detector neutron spectrometer with nearly isotropic response for environmental and workplace monitoring

    NASA Astrophysics Data System (ADS)

    Gómez-Ros, J. M.; Bedogni, R.; Moraleda, M.; Delgado, A.; Romero, A.; Esposito, A.

    2010-01-01

    This communication describes an improved design for a neutron spectrometer consisting of 6Li thermoluminescent dosemeters located at selected positions within a single moderating polyethylene sphere. The spatial arrangement of the dosemeters has been designed using the MCNPX Monte Carlo code to calculate the response matrix for 56 log-equidistant energies from 10^-9 to 100 MeV, looking for a configuration that yields a nearly isotropic response for neutrons in the energy range from thermal to 20 MeV. The feasibility of the proposed spectrometer and the isotropy of its response have been evaluated by simulating exposures to different reference and workplace neutron fields. The FRUIT code has been used for unfolding purposes. The results of the simulations as well as the experimental tests confirm the suitability of the prototype for environmental and workplace monitoring applications.

  10. Using computational modeling to compare X-ray tube Practical Peak Voltage for Dental Radiology

    NASA Astrophysics Data System (ADS)

    Holanda Cassiano, Deisemar; Arruda Correa, Samanda Cristine; de Souza, Edmilson Monteiro; da Silva, Ademir Xaxier; Pereira Peixoto, José Guilherme; Tadeu Lopes, Ricardo

    2014-02-01

    The Practical Peak Voltage (PPV) has been adopted to measure the voltage applied to an X-ray tube. The PPV was recommended by the IEC document and accepted and published in the TRS no. 457 code of practice. The PPV is defined for and applicable to all waveforms and is related to the spectral distribution of X-rays and to the properties of the image. The calibration of X-ray tubes was performed using the MCNPX Monte Carlo code. An X-ray tube for Dental Radiology (operated from a single-phase power supply) and an X-ray tube used as a reference (supplied from a constant-potential power supply) were used in simulations across the energy range of interest, 40 kV to 100 kV. The results indicated a linear relationship between the two tubes.

  11. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.

    Proton double-differential cross sections from the 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, the excitation energy dependencies were found to be inconsistent with the Fermi-gas model.
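
    For reference, the Gilbert and Cameron prescription tested above is a composite of a constant-temperature form at low excitation energy and the Fermi-gas form above a matching point (standard expressions quoted for orientation; a, T, E_0, the pairing shift Δ and the spin cutoff σ are model parameters):

        \[ \rho_{CT}(E) = \frac{1}{T}\exp\!\left(\frac{E - E_0}{T}\right), \qquad
           \rho_{FG}(U) = \frac{\exp\!\left(2\sqrt{aU}\right)}{12\sqrt{2}\,\sigma\, a^{1/4} U^{5/4}}, \quad U = E - \Delta \]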

  12. Coded-aperture imaging of the Galactic center region at gamma-ray energies

    NASA Technical Reports Server (NTRS)

    Cook, Walter R.; Grunsfeld, John M.; Heindl, William A.; Palmer, David M.; Prince, Thomas A.

    1991-01-01

    The first coded-aperture images of the Galactic center region at energies above 30 keV have revealed two strong gamma-ray sources. One source has been identified with the X-ray source 1E 1740.7-2942, located 0.8 deg away from the nucleus. If this source is at the distance of the Galactic center, it is one of the most luminous objects in the galaxy at energies from 35 to 200 keV. The second source is consistent in location with the X-ray source GX 354+0 (MXB 1728-34). In addition, gamma-ray flux from the location of GX 1+4 was marginally detected at a level consistent with other post-1980 measurements. No significant hard X-ray or gamma-ray flux was detected from the direction of the Galactic nucleus or from the direction of the recently discovered gamma-ray source GRS 1758-258.
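
    Coded-aperture images of this kind are commonly reconstructed by cross-correlating the detector-plane counts with a decoding array matched to the mask pattern. The snippet below sketches that generic step only (it is not the instrument's reconstruction software, and the mask and decoding arrays are assumed supplied by the caller):

        import numpy as np
        from scipy.signal import correlate2d

        def decode_image(detector_counts, decoding_array):
            """Periodic cross-correlation reconstruction for a coded-aperture camera.

            The decoding array is chosen so that (mask correlated with decoding) is
            close to a delta function; the correlation then estimates the sky image.
            """
            return correlate2d(detector_counts, decoding_array,
                               mode='same', boundary='wrap')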

  13. Element analysis and calculation of the attenuation coefficients for gold, bronze and water matrixes using MCNP, WinXCom and experimental data

    NASA Astrophysics Data System (ADS)

    Esfandiari, M.; Shirmardi, S. P.; Medhat, M. E.

    2014-06-01

    In this study, elemental analysis and the mass attenuation coefficients for matrices of gold, bronze and water with various impurities and concentrations of heavy metals (Cu, Mn, Pb and Zn) are evaluated and calculated with the MCNP simulation code for photons emitted from Barium-133, Americium-241 and sources with energies between 1 and 100 keV. The MCNP data are compared with the experimental data and with results simulated by Medhat using the WinXCom code. The obtained results for the bronze and gold matrices are in good agreement with the other methods for energies above 40 and 60 keV, respectively. However, for water matrices with various impurities, there is good agreement between the three methods, MCNP, WinXCom and experiment, at both low and high energies.
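
    The quantity being compared is governed by the standard exponential attenuation law and the mixture rule on which WinXCom-type calculations rely (quoted here as background, with w_i the mass fraction of element i):

        \[ \frac{I}{I_0} = \exp\!\left[-\left(\frac{\mu}{\rho}\right)\rho\, t\right], \qquad
           \left(\frac{\mu}{\rho}\right)_{\mathrm{mix}} = \sum_i w_i \left(\frac{\mu}{\rho}\right)_i \]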

  14. Activation of accelerator construction materials by heavy ions

    NASA Astrophysics Data System (ADS)

    Katrík, P.; Mustafin, E.; Hoffmann, D. H. H.; Pavlovič, M.; Strašík, I.

    2015-12-01

    Activation data for an aluminum target irradiated by a 200 MeV/u 238U ion beam are presented in this paper. The target was irradiated in the stacked-foil geometry and analyzed using gamma-ray spectroscopy. The purpose of the experiment was to study the role of primary particles, projectile fragments, and target fragments in the activation process using the depth profiling of residual activity. The study showed which particles contribute dominantly to the target activation. The experimental data were compared with Monte Carlo simulations by the FLUKA 2011.2c.0 code. This study is part of a research program devoted to activation of accelerator construction materials by high-energy (⩾200 MeV/u) heavy ions at GSI Darmstadt. The experimental data are needed to validate the computer codes used for simulation of the interaction of swift heavy ions with matter.

  15. Carbon Radiation Studies in the DIII-D Divertor with the Monte Carlo Impurity (MCI) Code

    NASA Astrophysics Data System (ADS)

    Evans, T. E.; Leonard, A. W.; West, W. P.; Finkenthal, D. F.; Fenstermacher, M. E.; Porter, G. D.; Chu, Y.

    1998-11-01

    Carbon sputtering and transport are modeled in the DIII-D divertor with the MCI code. Calculated 2-D radiation patterns are compared with measured radiation distributions. The results are particularly sensitive to T_i near the divertor target plates. For example, increasing the ion temperature from 8 eV to 20 eV in MCI raises P_rad^div from 1626 to 2862 kW. Although this presents difficulties in assessing which sputtering model best describes the plasma-surface interaction physics (because of experimental uncertainties in T_i), processes which either produce too much or too little radiated power compared to the measured value of 1718 kW can be eliminated. Based on this, the number of viable sputtering options has been reduced from 12 to 4. For the conditions studied, three of these options involve both physical and chemical sputtering, and one requires only physical sputtering.

  16. Fast Faraday cup for fast ion beam TOF measurements in deuterium filled plasma focus device and correlation with Lee model

    NASA Astrophysics Data System (ADS)

    Damideh, Vahid; Ali, Jalil; Saw, Sor Heoh; Rawat, Rajdeep Singh; Lee, Paul; Chaudhary, Kashif Tufail; Rizvi, Zuhaib Haider; Dabagh, Shadab; Ismail, Fairuz Diyana; Sing, Lee

    2017-06-01

    In this work, the design and construction of a 50 Ω fast Faraday cup are presented, together with its results, in correlation with the Lee Model Code, for fast ion beam and ion time-of-flight measurements in a Deuterium-filled plasma focus device. Fast ion beam properties such as ion flux, fluence, speed, and energy at 2-8 Torr Deuterium are studied. The narrowest ion signal, with a full width at half maximum of 34 ns, was captured by the Faraday cup at 12 kV and 3 Torr Deuterium in the INTI PF. The maximum ion energy of 67 ± 5 keV was detected by the Faraday cup at 4 Torr Deuterium. Ion time-of-flight measurements by the Faraday cup show consistent correlation with Lee Code results for Deuterium, especially near the optimum pressure.
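
    Time-of-flight measurements of this kind convert the measured transit time into a (non-relativistic) ion energy through the elementary relation below, with L the source-to-cup flight path, t the time of flight and m_d the deuteron mass (quoted for orientation, not as the paper's specific analysis):

        \[ E = \tfrac{1}{2}\, m_d \left(\frac{L}{t}\right)^{2} \]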

  17. Recoil Distance Lifetime Measurements of the Nuclei SAMARIUM-146 and EUROPIUM-147, Using (HI, xn gamma) Reactions.

    NASA Astrophysics Data System (ADS)

    Rozak, Stephen

    Recoil-distance lifetime measurements have been performed on several levels in 146Sm and 147Eu, using the reactions 139La(11B,4n)146Sm and 139La(12C,4n)147Eu. The data were analyzed with an algorithm incorporated into a computer code PLUNGER that treats arbitrarily complex cascade feeding in a new, mathematically rigorous, formalism. Higher order corrections are also incorporated into the code and are discussed. The measured mean lifetimes in 146Sm are 7 ps (2_1^+, 747.2 keV), 3 ps (4_1^+, 1381.2 keV), 125 ps (6_1^+, 1811.5 keV), 15.7 ps (7_1^-, 2600.3 keV), 16.4 ps (8_1^+, 2737.1 keV), 1.2 ns (9_1^-, 2797.6 keV), 38.8 ps (9_2^-, 3354.5 keV), 14.5 ps (11_1^-, 3783.5 keV), 7.1 ps (11_2^-, 4091.2 keV), 2.3 ps (12^-, 4461.4 keV), and 7.6 ps (13^-, 4628.8 keV). The lifetimes measured in 147Eu are 6.8 ps (15/2^-, 1346.7 keV), 12.2 ps (19/2^-, 1926.9 keV), 137 ps (23/2^-, 2293.2 keV), 72.2 ps (27/2^-, 2900.9 keV), and 33 ps (23/2^-, 2996.9 keV). The results for 146Sm were compared to calculations of the IBA model and the cluster-vibrator model. Both models have good success reproducing the data up to the 6^+ → 4^+ transition. They both fail to reproduce the transition probabilities for the 8^+ → 6^+ transition. The data also support the interpretation of the lowest negative parity levels (3^-, 5^-, 7^-, 9^-, 11^-) as being a band composed of an octupole phonon coupled to the ground state band. The data support the interpretation of the 11/2^-, 15/2^-, 19/2^-, 23/2^-, and 27/2^- levels in 147Eu as comprised of a valence nucleon coupled to the 0^+, 2^+, 4^+, 6^+, and 8^+ levels in 146Sm. The success of this work also demonstrates that the feeding problem is not insurmountable when applying Doppler shift recoil-distance techniques to nuclei formed by (HI,xn) reactions, even when complicated decay schemes are involved.
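
    In the idealized recoil-distance method (no cascade feeding, which is exactly the complication the PLUNGER formalism treats), the unshifted fraction of a transition's intensity falls off exponentially with the target-stopper distance d, giving direct access to the mean lifetime τ for recoils of velocity v:

        \[ R(d) = \frac{I_{\mathrm{unshifted}}}{I_{\mathrm{unshifted}} + I_{\mathrm{shifted}}}
                = \exp\!\left(-\frac{d}{v\tau}\right) \]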

  18. Measurement of Optical Radiations in Spacecraft Environments

    DTIC Science & Technology

    1989-06-15

    intended to test the Critical Ionization Velocity hypothesis, and thus could result in structured ion clouds), outgas contamination, and recombination...scene radiances from computer models of the exhaust and outgassing (such as the CHARM code), we are using the simple distribution of outgas applied...OH (A, v = 0) -> OH (X, v = 0) + hν (λ = 3064 Å [+-40 -0 Å]). Our overall conclusion is that water outgas rates greater than roughly 1 milligram per

  19. RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade

    DTIC Science & Technology

    2015-09-30

    NOAA), Robin Hogan (ECMWF), a number of colleagues at the Max-Planck Institute, and Will Sawyer and Marcus Wetzstein (Swiss Supercomputer Center...somewhat out of date, so that the accuracy of our simplified algorithms cannot be thoroughly evaluated. RRTMGP_LW_v0 has been provided to our NASA ...support, RRTMGP_LW_v0, has been completed and distributed to selected colleagues at modeling centers, including NOAA, NCAR, and CSCS. Our colleagues

  20. Population responses in V1 encode different figures by response amplitude.

    PubMed

    Gilad, Ariel; Slovin, Hamutal

    2015-04-22

    The visual system simultaneously segregates between several objects presented in a visual scene. The neural code for encoding different objects or figures is not well understood. To study this question, we trained two monkeys to discriminate whether two elongated bars are either separate, thus generating two different figures, or connected, thus generating a single figure. Using voltage-sensitive dyes, we imaged at high spatial and temporal resolution V1 population responses evoked by the two bars, while keeping their local attributes similar among the two conditions. In the separate condition, unlike the connected condition, the population response to one bar is enhanced, whereas the response to the other is simultaneously suppressed. The response to the background remained unchanged between the two conditions. This divergent pattern developed ∼200 ms poststimulus onset and could discriminate well between the separate and connected single trials. The stimulus separation saliency and behavioral report were highly correlated with the differential response to the bars. In addition, the proximity and/or the specific location of the connectors seemed to have only a weak effect on this late activity pattern, further supporting the involvement of top-down influences. Additional neural codes were less informative about the separate and connected conditions, with much less consistency and discriminability compared with a response amplitude code. We suggest that V1 is involved in the encoding of each figure by different neuronal response amplitude, which can mediate their segregation and perception.
