Sample records for the TALYS reaction code

  1. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    NASA Astrophysics Data System (ADS)

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-01

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for the simulation of the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; the parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  2. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of {sup 64}Cu and {sup 67}Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasrabadi, M. N., E-mail: mnnasrabadi@ast.ui.ac.ir; Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for the simulation of the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; the parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  3. TALYS/TENDL verification and validation processes: Outcomes and recommendations

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark R.; Koning, Arjan; Rochman, Dimitri

    2017-09-01

    The TALYS-generated Evaluated Nuclear Data Libraries (TENDL) provide truly general-purpose nuclear data files assembled from the outputs of the T6 nuclear model code system for direct use in both basic physics and engineering applications. The most recent TENDL-2015 version is based on both default and adjusted parameters of the most recent TALYS, TAFIS, TANES, TARES, TEFAL and TASMAN codes, wrapped into a Total Monte Carlo loop for uncertainty quantification. TENDL-2015 contains complete neutron-incident evaluations up to 200 MeV for all target nuclides with Z ≤ 116 and half-lives longer than 1 second (2809 isotopes with 544 isomeric states), with covariances and all reaction daughter products, including isomers with half-lives greater than 100 milliseconds. With the added High Fidelity Resonance (HFR) approach, all resonances are unique and follow statistical rules. The TENDL-2014/2015 libraries have been validated against standard, evaluated, microscopic and integral cross sections using a newly compiled UKAEA database of thermal, resonance-integral, Maxwellian-averaged, 14 MeV and various accelerator-driven neutron-source-spectrum cross sections. This database was assembled using the most up-to-date, internationally recognised data sources, including the Atlas of Resonances, CRC, evaluated EXFOR, activation databases, and fusion, fission and MACS data. Excellent agreement was found, with only a small set of discrepancies traced to the reference databases and the TENDL-2014 predictions.
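    For orientation, the Maxwellian-averaged cross sections (MACS) referred to in this validation follow the standard definition (quoted here from common usage, not from the TENDL paper itself): the point-wise cross section is folded with a Maxwell-Boltzmann flux at thermal energy kT,

      \langle\sigma\rangle_{kT} = \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}}\int_{0}^{\infty}\sigma(E)\,E\,e^{-E/kT}\,dE .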

  4. Fission Activities of the Nuclear Reactions Group in Uppsala

    NASA Astrophysics Data System (ADS)

    Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Jansson, K.; Koning, A.; Lantz, M.; Mattera, A.; Prokofiev, A. V.; Rakopoulos, V.; Sjöstrand, H.; Solders, A.; Tarrío, D.; Österlund, M.; Pomp, S.

    This paper highlights some of the main fission-related activities of the nuclear reactions group at Uppsala University. The group is involved, for instance, in fission yield experiments at the IGISOL facility, cross-section measurements at the NFS facility, and fission dynamics studies at the IRMM (JRC-EC). Moreover, work is ongoing on the Total Monte Carlo (TMC) methodology and on incorporating the GEF fission code into the TALYS nuclear reaction code. Selected results from these projects are discussed.

  5. Teaching and Learning International Survey TALIS 2013: Conceptual Framework. Final

    ERIC Educational Resources Information Center

    Rutkowski, David; Rutkowski, Leslie; Bélanger, Julie; Knoll, Steffen; Weatherby, Kristen; Prusinski, Ellen

    2013-01-01

    In 2008, the initial cycle of the OECD's Teaching and Learning International Survey (TALIS 2008) established, for the first time, an international, large-scale survey of the teaching workforce, the conditions of teaching, and the learning environments of schools in participating countries. The second cycle of TALIS (TALIS 2013) aims to continue…

  6. Theoretical evaluation of the reaction rates for {sup 26}Al(n,p){sup 26}Mg and {sup 26}Al(n,{alpha}){sup 23}Na

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oginni, B. M.; Iliadis, C.; Champagne, A. E.

    2011-02-15

    The reactions that destroy {sup 26}Al in massive stars have significance in a number of astrophysical contexts. We evaluate the reaction rates of {sup 26}Al(n,p){sup 26}Mg and {sup 26}Al(n,{alpha}){sup 23}Na using cross sections obtained from the codes EMPIRE and TALYS. These have been compared to the published rates obtained from the NON-SMOKER code and to some experimental data. We show that the results obtained from EMPIRE and TALYS are comparable to those from NON-SMOKER. We also show how the theoretical results vary with respect to changes in the input parameters. Finally, we present recommended rates for these reactions using the available experimental data and our new theoretical results.
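    As background for how such rates are built from cross sections (a textbook relation, not one quoted from this abstract), the thermonuclear reaction rate per particle pair is the Maxwell-Boltzmann average of σv at temperature T,

      N_A\langle\sigma v\rangle = N_A\,\sqrt{\frac{8}{\pi\mu}}\,\frac{1}{(kT)^{3/2}}\int_{0}^{\infty}\sigma(E)\,E\,e^{-E/kT}\,dE ,

    where μ is the reduced mass of the entrance channel.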

  7. School Leadership for Learning: Insights from TALIS 2013

    ERIC Educational Resources Information Center

    OECD Publishing, 2016

    2016-01-01

    The OECD Teaching and Learning International Survey (TALIS) is the largest international survey of teachers and school leaders. Using the TALIS database, this report looks at different approaches to school leadership and the impact of school leadership on professional learning communities and on the learning climate in individual schools. It looks…

  8. Special features of isomeric ratios in nuclear reactions induced by various projectile particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danagulyan, A. S.; Hovhannisyan, G. H., E-mail: hov-gohar@ysu.am; Bakhshiyan, T. M.

    2016-05-15

    Calculations for (p, n) and (α, p3n) reactions were performed with the aid of the TALYS-1.4 code. Reactions in which the mass numbers of the target and product nuclei are identical were examined in the range A = 44–124. Excitation functions were obtained for product nuclei in the ground and isomeric states, and isomeric ratios were calculated. The calculated data reflect well the dependence of the isomeric ratios on the projectile type. A comparison of the calculated and experimental data reveals that, for some nuclei in a high-spin state, the calculated values fall greatly short of their experimental counterparts. These discrepancies may be due to the presence of high-spin yrast states and rotational bands in these nuclei. Calculations involving the various level-density models included in the TALYS-1.4 code, with allowance for the enhancement of collective effects, do not remove the discrepancies in the majority of cases.
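    For reference, the isomeric ratio discussed here is commonly defined as the ratio of the cross section populating the isomeric state to that populating the ground state, or in some works to the total cross section (the convention should be checked when comparing data sets):

      IR = \frac{\sigma_{m}}{\sigma_{g}} \qquad\text{or}\qquad IR = \frac{\sigma_{m}}{\sigma_{m}+\sigma_{g}} .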

  9. TALIS 2013 Technical Report: Teaching and Learning International Survey

    ERIC Educational Resources Information Center

    OECD Publishing, 2013

    2013-01-01

    Effective teaching and teachers are key to producing high-performing students worldwide. So how can countries prepare teachers to face the diverse challenges in today's schools? The Teaching and Learning International Survey (TALIS) helps answer this question. TALIS asks teachers and schools about their working conditions and the learning…

  10. Energy spectrum of 208Pb(n,x) reactions

    NASA Astrophysics Data System (ADS)

    Tel, E.; Kavun, Y.; Özdoǧan, H.; Kaplan, A.

    2018-02-01

    Fission and fusion reactor technologies have been investigated worldwide since the 1950s. For reactor technology, investigations of fission and fusion reactions play an important role in improving new-generation technologies. In particular, neutron-reaction studies have an important place in the development of nuclear materials, so neutron effects on materials should be studied both theoretically and experimentally to improve reactor design. Nuclear reaction codes are very useful tools when experimental data are unavailable; for such circumstances scientists have created many nuclear reaction codes such as ALICE/ASH, CEM95, PCROSS, TALYS, GEANT and FLUKA. In this study we used the ALICE/ASH, PCROSS and CEM95 codes to calculate the energy spectra of particles emitted when Pb is bombarded by neutrons. The Weisskopf-Ewing model was used for the equilibrium process, while the full exciton, hybrid and geometry-dependent hybrid nuclear reaction models were used for the pre-equilibrium process. The calculated results are discussed and compared with experimental data taken from EXFOR.
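    For readers unfamiliar with the equilibrium model named here, the Weisskopf-Ewing emission spectrum for a particle b with channel energy ε has the schematic textbook form (symbols as usually defined; this expression is not taken from the abstract):

      \frac{d\sigma_{b}}{d\varepsilon} \propto (2s_{b}+1)\,\mu_{b}\,\varepsilon\,\sigma_{\mathrm{inv}}(\varepsilon)\,\frac{\rho_{f}(E^{*}-B_{b}-\varepsilon)}{\rho_{i}(E^{*})} ,

    where σ_inv is the inverse (absorption) cross section, ρ_i and ρ_f are the level densities of the emitting and residual nuclei, B_b is the particle separation energy, and s_b and μ_b are the spin and reduced mass of the emitted particle.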

  11. Bruyères-le-Châtel Neutron Evaluations of Actinides with the TALYS Code: The Fission Channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romain, P., E-mail: pascal.romain@cea.fr; Morillon, B.; Duarte, H.

    For several years, various neutron evaluations of plutonium and uranium isotopes have been performed at Bruyères-le-Châtel (BRC), from 1 keV up to 30 MeV. Since only nuclear reaction models have been used to produce these evaluations, our approach was named the “Full Model” approach. Total, shape elastic and direct inelastic cross sections were obtained from the coupled channels model using a dispersive optical potential developed for actinides, with a large enough coupling scheme including the lowest octupolar band. All other cross sections were calculated using the Hauser-Feshbach theory (TALYS code) with a pre-equilibrium component above 8–10 MeV. In this paper, we focus our attention on the fission channel. More precisely, we will present the BRC contribution to fission modeling and the philosophy adopted in our “Full Model” approach. Performing evaluations with the “Full Model” approach implies the optimization of a large number of model parameters. With increasing neutron incident energy, many residual nuclei produced by nucleon emission also lead to fission. All available experimental data assigned to various fission mechanisms of the same nucleus were used to determine fission barrier parameters. For uranium isotopes, triple-humped fission barriers were required in order to reproduce accurately variations of the experimental fission cross sections. Our BRC fission modeling has shown that the effects of the class II or class III states located in the wells of the fission barrier sometimes provide an anti-resonant transmission rather than a resonant one. Consistent evaluations were produced for a large series of U and Pu isotopes. Resulting files were tested against integral data.

  12. Measurement of the 58Fe(p,n)58Co reaction cross-section within the proton energy range of 3.38 to 19.63 MeV

    NASA Astrophysics Data System (ADS)

    Ghosh, Reetuparna; Badwar, Sylvia; Lawriniang, Bioletty; Jyrwa, Betylda; Naik, Haldhara; Naik, Yeshwant; Suryanarayana, Saraswatula Venkata; Ganesan, Srinivasan

    2017-08-01

    The 58Fe(p,n)58Co reaction cross-section within the Giant Dipole Resonance (GDR) region, i.e. from 3.38 to 19.63 MeV, was measured by the stacked-foil activation and off-line γ-ray spectrometric technique using the BARC-TIFR Pelletron facility at Mumbai. The present data were compared with the existing literature data and found to be in good agreement. The 58Fe(p,n)58Co reaction cross-section as a function of proton energy was also calculated theoretically using the computer code TALYS-1.8 and found to be in good agreement with the measurements, which supports the validity of the TALYS-1.8 program.
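    The activation technique used in this and several of the following records relates the measured γ-ray peak area to the cross section through the usual activation equation (standard form for a constant beam flux, assumed here rather than quoted from the paper):

      \sigma = \frac{\lambda\,C}{N\,\phi\,I_{\gamma}\,\varepsilon_{\gamma}\,\left(1-e^{-\lambda t_{\mathrm{irr}}}\right)e^{-\lambda t_{\mathrm{cool}}}\left(1-e^{-\lambda t_{\mathrm{meas}}}\right)} ,

    where C is the net peak area, N the number of target atoms, φ the projectile flux, I_γ and ε_γ the γ-ray emission probability and detector efficiency, λ the decay constant, and t_irr, t_cool, t_meas the irradiation, cooling and counting times.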

  13. Neutron-induced reaction cross-sections of 93Nb with fast neutron based on 9Be(p,n) reaction

    NASA Astrophysics Data System (ADS)

    Naik, H.; Kim, G. N.; Kim, K.; Zaman, M.; Nadeem, M.; Sahid, M.

    2018-02-01

    The cross-sections of the 93Nb(n,2n)92mNb, 93Nb(n,3n)91mNb and 93Nb(n,4n)90Nb reactions at average neutron energies of 14.4 to 34.0 MeV have been determined by using an activation and off-line γ-ray spectrometric technique. The fast neutrons were produced using the 9Be(p,n) reaction with proton energies of 25, 35 and 45 MeV from the MC-50 Cyclotron at the Korea Institute of Radiological and Medical Sciences (KIRAMS). The neutron flux-weighted average cross-sections of the 93Nb(n,xn; x = 2-4) reactions were also obtained from the mono-energetic neutron-induced reaction cross-sections of 93Nb calculated using the TALYS 1.8 code and the neutron flux spectrum based on the MCNPX 2.6.0 code. The present results for the 93Nb(n,xn; x = 2-4) reactions are compared with the calculated neutron flux-weighted average values and found to be in good agreement.
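    The flux-weighted averaging referred to above presumably follows the usual definition: the mono-energetic TALYS cross sections σ(E) are folded with the simulated neutron spectrum φ(E),

      \langle\sigma\rangle = \frac{\int \sigma(E)\,\phi(E)\,dE}{\int \phi(E)\,dE} .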

  14. Teaching and Learning International Survey (TALIS) 2013: U.S. Technical Report. NCES 2015-010

    ERIC Educational Resources Information Center

    Strizek, Gregory A.; Tourkin, Steve; Erberber, Ebru

    2014-01-01

    This technical report is designed to provide researchers with an overview of the design and implementation of the Teaching and Learning International Survey (TALIS) 2013. This information is meant to supplement that presented in OECD publications by describing those aspects of TALIS 2013 that are unique to the United States. Chapter 2 provides…

  15. Supporting Teacher Professionalism: Insights from TALIS 2013

    ERIC Educational Resources Information Center

    OECD Publishing, 2016

    2016-01-01

    This report examines the nature and extent of support for teacher professionalism using the Teaching and Learning International Survey (TALIS) 2013, a survey of teachers and principals in 34 countries and economies around the world. Teacher professionalism is defined as the knowledge, skills, and practices that teachers must have in order to be…

  16. Excitation function of alpha-particle-induced reactions on natNi from threshold to 44 MeV

    NASA Astrophysics Data System (ADS)

    Uddin, M. S.; Kim, K. S.; Nadeem, M.; Sudár, S.; Kim, G. N.

    2017-05-01

    Excitation functions of the natNi(α,x)62,63,65Zn, natNi(α,x)56,57Ni and natNi(α,x)56,57,58m+gCo reactions were measured from the respective thresholds to 44 MeV using the stacked-foil activation technique. The tests performed for beam characterization are described. The radioactivity was measured using HPGe γ-ray detectors. Theoretical calculations of α-particle-induced reactions on natNi were performed using the nuclear model code TALYS-1.8. A few results are new; the others strengthen the database. Our experimental data were compared with the results of the nuclear model calculations, and the reaction mechanism is discussed.

  17. Activation cross-section measurement of proton induced reactions on cerium

    NASA Astrophysics Data System (ADS)

    Tárkányi, F.; Hermanne, A.; Ditrói, F.; Takács, S.; Spahn, I.; Spellerberg, S.

    2017-12-01

    In the framework of a systematic study of proton induced nuclear reactions on lanthanides, we have measured the excitation functions on natural cerium for the production of 142,139,138m,137Pr, 141,139,137m,137g,135Ce and 133La up to 65 MeV proton energy using the activation method with the stacked-foil irradiation technique and high-resolution γ-ray spectrometry. The cross-sections of the investigated reactions were compared with the data retrieved from the TENDL-2014 and TENDL-2015 libraries, based on the latest version of the TALYS code system. No earlier experimental data were found in the literature. The measured cross-section data are important for further improvement of nuclear reaction models and for practical applications in nuclear medicine and in other labeling and activation studies.

  18. Measurement of cross-sections for the 93Nb(p,n)93mMo and 93Nb(p,pn)92mNb reactions up to ∼20 MeV energy

    NASA Astrophysics Data System (ADS)

    Lawriniang, B.; Ghosh, R.; Badwar, S.; Vansola, V.; Santhi Sheela, Y.; Suryanarayana, S. V.; Naik, H.; Naik, Y. P.; Jyrwa, B.

    2018-05-01

    Excitation functions of the 93Nb(p,n)93mMo and 93Nb(p,pn)92mNb reactions were measured from the threshold energies to ∼20 MeV by employing the stacked-foil activation technique in combination with off-line γ-ray spectroscopy at the BARC-TIFR Pelletron facility, Mumbai. For the 20 MeV proton beam, the energy degradation along the stack was calculated using the computer code SRIM 2013. The proton beam intensity was determined via the natCu(p,x)62Zn monitor reaction. The experimental data obtained were compared with the theoretical results from TALYS-1.8 as well as with the literature data available in EXFOR. It was found that for the 93Nb(p,n)93mMo reaction, the present data are in close agreement with some of the recent literature data and the theoretical values based on TALYS-1.8, but are lower than the other literature data. In the case of the 93Nb(p,pn)92mNb reaction, the present data agree very well with the literature data and the theoretical values.

  19. Evaluation of excitation functions of proton and deuteron induced reactions on enriched tellurium isotopes with special relevance to the production of iodine-124.

    PubMed

    Aslam, M N; Sudár, S; Hussain, M; Malik, A A; Shah, H A; Qaim, S M

    2010-09-01

    Cross-section data for the production of the medically important radionuclide (124)I via five proton and deuteron induced reactions on enriched tellurium isotopes were evaluated. The nuclear model codes STAPRE, EMPIRE and TALYS were used for consistency checks of the experimental data. Recommended excitation functions were derived using a well-defined statistical procedure, and therefrom integral yields were calculated. The various production routes of (124)I were compared. Presently the (124)Te(p,n)(124)I reaction is the method of choice; however, the (125)Te(p,2n)(124)I reaction also appears to have great potential.

  20. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talou, P.; Chadwick, M. B.; Dietrich, F. S.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes used in evaluation work for nuclear reactions from the unresolved-resonance energy region up to the pion production threshold, and for target nuclides from mass numbers in the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  1. Reaction path of energetic materials using THOR code

    NASA Astrophysics Data System (ADS)

    Durães, L.; Campos, J.; Portugal, A.

    1998-07-01

    The method of predicting the reaction path using the THOR code allows, for isobaric and isochoric adiabatic combustion and CJ detonation regimes, the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using the HL EoS. The code allows various sets of reaction products to be estimated, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated and discussed (pure Ammonium Nitrate and the AN-based explosive ANFO, and Nitromethane), because their equivalence ratios are respectively lower than, near, and greater than stoichiometric. Predictions of the reaction path are in good correlation with experimental values, proving the validity of the proposed method.
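    To illustrate the free-energy-minimization idea behind such thermochemical codes (this is a generic sketch, not the THOR code itself), the snippet below finds an equilibrium product composition by minimizing the ideal-gas Gibbs free energy subject to element balance; the species list and the mu0 values are placeholders, not real thermodynamic data.

      import numpy as np
      from scipy.optimize import minimize

      # Placeholder species and dimensionless standard chemical potentials
      # mu0/RT (illustrative numbers only, NOT real thermodynamic data).
      species = ["CO2", "CO", "H2O", "H2", "N2"]
      mu0_RT = np.array([-50.0, -30.0, -40.0, 0.0, 0.0])

      # Element balance matrix A (rows: C, H, O, N) and element totals b
      # corresponding to an assumed C1 H4 O3 N2 reactant composition.
      A = np.array([[1, 1, 0, 0, 0],    # C
                    [0, 0, 2, 2, 0],    # H
                    [2, 1, 1, 0, 0],    # O
                    [0, 0, 0, 0, 2]])   # N
      b = np.array([1.0, 4.0, 3.0, 2.0])

      def gibbs(n):
          """Dimensionless ideal-gas Gibbs energy G/RT at 1 bar."""
          n = np.maximum(n, 1e-12)            # keep the logarithm defined
          return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

      result = minimize(gibbs, np.full(len(species), 0.5), method="SLSQP",
                        bounds=[(0.0, None)] * len(species),
                        constraints={"type": "eq", "fun": lambda n: A @ n - b})

      for name, moles in zip(species, result.x):
          print(f"{name}: {moles:.3f} mol")

    Real codes such as THOR additionally use a non-ideal equation of state for the dense products; the sketch only shows the constrained-minimization structure.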

  2. Absolute cross sections of the 86Sr(α,n)89Zr reaction at energies of astrophysical interest

    NASA Astrophysics Data System (ADS)

    Oprea, Andreea; Glodariu, Tudor; Filipescu, Dan; Gheorghe, Ioana; Mitu, Andreea; Boromiza, Marian; Bucurescu, Dorel; Costache, Cristian; Cata-Danil, Irina; Florea, Nicoleta; Ghita, Dan Gabriel; Ionescu, Alina; Marginean, Nicolae; Marginean, Raluca; Mihai, Constantin; Mihai, Radu; Negret, Alexandru; Nita, Cristina; Olacel, Adina; Pascu, Sorin; Sotty, Cristophe; Suvaila, Rares; Stan, Lucian; Stroe, Lucian; Serban, Andreea; Stiru, Irina; Toma, Sebastian; Turturica, Andrei; Ujeniuc, Sorin

    2017-09-01

    Absolute cross sections for the 86Sr(α,n)89Zr reaction at energies close to the Gamow window are reported. Three thin SrF2 targets were irradiated using the 9 MV Tandem facility in IFIN-HH Bucharest that delivered α beams for the activation process. Two high-purity Germanium detectors were used to measure the induced activity of 89Zr in a low background environment. The experimental results are in very good agreement with Hauser-Feshbach statistical model calculations performed with the TALYS code.

  3. Photo-neutron reaction cross-sections for natMo in the bremsstrahlung end-point energies of 12-16 and 45-70 MeV

    NASA Astrophysics Data System (ADS)

    Naik, H.; Kim, G. N.; Kapote Noy, R.; Schwengner, R.; Kim, K.; Zaman, M.; Shin, S. G.; Gey, Y.; Massarczyk, R.; John, R.; Junghans, A.; Wagner, A.; Cho, M.-H.

    2016-07-01

    The natMo(γ,xn)90,91,99Mo reaction cross-sections were experimentally determined at bremsstrahlung end-point energies of 12, 14, 16, 45, 50, 55, 60 and 70 MeV by an activation and off-line γ-ray spectrometric technique, using the 20 MeV electron linac ELBE at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany, and the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The natMo(γ,xn)88,89,90,91,99Mo reaction cross-sections as a function of photon energy were also calculated using the computer code TALYS 1.6. The flux-weighted average cross-sections were obtained from the literature data and from the TALYS values based on mono-energetic photons, and are found to be in general agreement with the present results. The flux-weighted average experimental and theoretical cross-sections for the natMo(γ,xn)88,89,90,91,99Mo reactions increase with the bremsstrahlung end-point energy, which indicates the role of excitation energy. Above a certain energy, the individual natMo(γ,xn) reaction cross-sections decrease with increasing bremsstrahlung energy due to the opening of other reaction channels, which indicates the sharing of energy among the different channels. The 100Mo(γ,n) reaction cross-section is important for the production of 99Mo, which is a probable alternative to the 98Mo(n,γ) and 235U(n,f) reactions.

  4. Alpha-induced reactions on selenium between 11 and 15 MeV

    NASA Astrophysics Data System (ADS)

    Fiebiger, Stefan; Slavkovská, Zuzana; Giesen, Ulrich; Göbel, Kathrin; Heftrich, Tanja; Heiske, Annett; Reifarth, René; Schmidt, Stefan; Sonnabend, Kerstin; Thomas, Benedikt; Weigand, Mario

    2017-07-01

    The production of 77,79,85,85mKr and 77Br via the reaction Se(α,x) was investigated between Eα = 11 and 15 MeV using the activation technique. The irradiation of natural selenium targets on aluminum backings was conducted at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany. The spectroscopic analysis of the reaction products was performed using a high-purity germanium detector located at PTB and a low-energy photon spectrometer detector at the Goethe University Frankfurt, Germany. Thick-target yields were determined. The corresponding energy-dependent production cross sections of 77,79,85,85mKr and 77Br were calculated from the thick-target yields. Good agreement between experimental data and theoretical predictions using the TALYS-1.6 code was found.

  5. Excitation functions for (d,x) reactions on (133)Cs up to Ed=40MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Baba, M; Ignatyuk, A V

    2016-04-01

    In the frame of a systematic study of excitation functions of deuteron induced reactions, the excitation functions of the (133)Cs(d,x)(133m,133mg,131mg)Ba, (134,132)Cs and (129m)Xe nuclear reactions were measured up to 40 MeV deuteron energy by using the stacked-foil irradiation technique and γ-ray spectroscopy of the activated samples. The results were compared with calculations performed with the theoretical nuclear reaction codes ALICE-IPPE-D and EMPIRE II-D and with the TALYS calculations listed in the TENDL-2014 library. A moderate agreement was obtained. Based on the integral yields deduced from our measured cross sections, the production of (131)Cs via the (133)Cs(d,4n)(131)Ba→(131)Cs reaction and of (133)Ba via (133)Cs(d,2n) reactions is discussed in comparison with other charged-particle production routes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Neutron-induced reactions on AlF3 studied using the optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Lv, Cui-Juan; Zhang, Guo-Qiang; Wang, Hong-Wei; Zuo, Jia-Xu

    2015-08-01

    Neutron-induced reactions on 27Al and 19F nuclei are investigated using the optical model implemented in the TALYS 1.4 toolkit. Incident neutron energies over a wide range, from 0.1 keV to 30 MeV, are considered. The cross sections for the main channels (n,np), (n,p), (n,α), (n,2n) and (n,γ), and the total reaction cross section (n,tot), are obtained. When the default parameters in TALYS 1.4 are adopted, the calculated results agree with the measured ones. Based on the calculated results for the n + 27Al and n + 19F reactions, the results for neutron-induced reactions on AlF3 are predicted. These results are useful both for the design of thorium-based molten salt reactors and for neutron activation analysis techniques.

  7. Morphology of sustentaculum tali: Biomechanical importance and correlation with angular dimensions of the talus.

    PubMed

    Mahato, Niladri Kumar

    2011-12-01

    The talus and the calcaneus share the bulk of the load transmitted from the leg to the skeleton of the foot. The present study analyses the inter-relationship of the superior articular surface and the angular dimensions of the talus with the morphology of the sustentaculum tali. The aim is the identification of possible relationships between different angular parameters of the talus morphology and the sustentaculum tali in the context of load transmission through the foot. One articular surface and three angular parameters at the junction of the head and the body were measured from dried human talar bones. Corresponding calcaneal samples were measured for four dimensions at the sustentaculum tali. Correlation and regression statistical values between parameters were worked out and analysed. Several parameters within the talus demonstrated significant correlations amongst themselves. The neck vertical angle showed a strong correlation with the articulating surface area below the head of the talus. The inter-relationship between articular and angular parameters within the talus demonstrates strong correlation for certain parameters. Data presented in the study may be helpful to adjust calcaneal and talar screw placement techniques, prosthesis designing and bio-mechanical studies at this important region. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Study of activation cross sections of deuteron induced reactions on barium. Production of 131Cs, 133Ba

    NASA Astrophysics Data System (ADS)

    Tárkányi, F.; Hermanne, A.; Ditrói, F.; Takács, S.; Szücs, Z.; Brezovcsik, K.

    2018-01-01

    In the frame of a systematic study of deuteron induced activation processes on middle-mass elements, excitation functions of the natBa(d,x)135,133,132La, 135m,133m,133mg,131mgBa and 136mg,134mg,132,129Cs reactions were measured up to 50 MeV for the first time. Cross sections were measured with the activation method, using a stacked-foil irradiation technique followed by HPGe γ-ray spectrometry. A comparison with the results of the nuclear model code TALYS (reported in the TENDL-2015 library) was made. The potential use of deuteron induced reactions on Ba for applications (131Cs and 133Ba production) is discussed.

  9. Study of activation cross-sections of deuteron induced reactions on rhodium up to 40 MeV

    NASA Astrophysics Data System (ADS)

    Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.; Yamazaki, H.; Baba, M.; Mohammadi, A.; Ignatyuk, A. V.

    2011-09-01

    In the frame of a systematic study of the activation cross-sections of deuteron induced nuclear reactions, excitation functions of the 103Rh(d,x)100,101,103Pd, 100g,101m,101g,102m,102gRh and 103gRu reactions were determined up to 40 MeV. Cross-sections were measured with the activation method using a stacked-foil irradiation technique. Excitation functions of the contributing reactions were calculated using the ALICE-IPPE, EMPIRE-II and TALYS codes. From the measured cross-section data, integral production yields were calculated and compared with the experimental integral yield data reported in the literature. From the measured cross-sections and previous data, activation curves were deduced to support thin layer activation (TLA) on rhodium and Rh-containing alloys.

  10. Proton and deuteron induced reactions on natGa: Experimental and calculated excitation functions

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Adam-Rebeles, R.; Tárkányi, F.; Takács, S.; Ditrói, F.

    2015-09-01

    Cross-sections for reactions on natGa, induced by protons (up to 65 MeV) and deuterons (up to 50 MeV), producing γ-emitting radionuclides with half-lives longer than 1 h were measured in a stacked-foil irradiation using thin Ga-Ni alloy (70-30%) targets electroplated on Cu or Au backings. Excitation functions for generation of 68,69Ge, 66,67,68,72Ga and 65,69mZn on natGa are discussed, relative to the monitor reactions natAl(d,x)24,22Na, natAl(p,x)24,22Na, natCu(p,x)62Zn and natNi(p,x)57Ni. The results are compared to our earlier measurements, the scarce literature values and to the results of the code TALYS 1.6 (online database TENDL-2014).

  11. Measurement of excitation functions in alpha induced reactions on natCu

    NASA Astrophysics Data System (ADS)

    Shahid, Muhammad; Kim, Kwangsoo; Kim, Guinyun; Zaman, Muhammad; Nadeem, Muhammad

    2015-09-01

    The excitation functions of the 66,67,68Ga, 62,63,65Zn, 61,64Cu and 58,60Co radionuclides in the natCu(α,x) reaction were measured in the energy range from 15 to 42 MeV by using a stacked-foil activation method at the MC-50 cyclotron of the Korea Institute of Radiological and Medical Sciences. The measured results were compared with the literature data as well as with the theoretical values obtained from the TENDL-2013 and TENDL-2014 libraries, which are based on the TALYS-1.6 code. The integral yields for thick targets of the produced radionuclides were also determined from the measured excitation functions and the stopping power of natural copper.
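    A minimal sketch of how such thick-target (integral) yields can be obtained from a measured excitation function and the stopping power, using the usual yield integral; the cross-section curve and stopping-power values below are placeholders, not the data of this work.

      import numpy as np

      # Placeholder excitation function sigma(E) [mb] and mass stopping power
      # dE/dx [MeV cm^2/g] of the target on a common energy grid [MeV].
      E = np.linspace(15.0, 42.0, 28)
      sigma_mb = 600.0 * np.exp(-((E - 25.0) / 6.0) ** 2)   # made-up shape
      dEdx = 20.0 + 0.2 * (42.0 - E)                         # made-up values

      N_A = 6.022e23
      M = 63.5                        # g/mol, natural copper
      sigma_cm2 = sigma_mb * 1e-27    # mb -> cm^2

      # Nuclei produced per incident particle while the beam slows from
      # E_in down to E_out inside a thick target.
      E_in, E_out = 42.0, 15.0
      mask = (E >= E_out) & (E <= E_in)
      yield_per_particle = (N_A / M) * np.trapz(sigma_cm2[mask] / dEdx[mask],
                                                E[mask])
      print(f"nuclei per incident alpha: {yield_per_particle:.3e}")

    Converting this number to an activity yield (e.g. MBq/μA·h) would additionally require the beam current, the irradiation time and the decay constant of the product.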

  12. Activation cross-sections of proton induced reactions on natHf in the 38-65 MeV energy range: Production of 172Lu and of 169Yb

    NASA Astrophysics Data System (ADS)

    Tárkányi, F.; Hermanne, A.; Ditrói, F.; Takács, S.; Ignatyuk, A. V.

    2018-07-01

    In the frame of a systematic study of light-ion-induced nuclear reactions on hafnium, activation cross sections for proton induced reactions were investigated. Excitation functions were measured in the 38-65 MeV energy range for the natHf(p,xn)180g,177,176,175,173Ta, natHf(p,x)180m,179m,175,173,172,171Hf, 177g,173,172,171,170,169Lu and natHf(p,x)169Yb reactions by using the activation method, combining stacked-foil irradiation and off-line gamma-ray spectroscopy. The experimental results are compared with earlier results in the overlapping energy range, and with the theoretical predictions of the ALICE-IPPE and EMPIRE codes and of the TALYS code as reported in the TENDL-2015 and TENDL-2017 libraries. The production routes of 172Lu (and its parent 172Hf) and of 169Yb are reviewed.

  13. Tungsten fragmentation in nuclear reactions induced by high-energy cosmic-ray protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chechenin, N. G., E-mail: chechenin@sinp.msu.ru; Chuvilskaya, T. V.; Shirokova, A. A.

    2015-01-15

    Tungsten fragmentation arising in nuclear reactions induced by cosmic-ray protons in space-vehicle electronics is considered. In modern technologies of integrated circuits featuring a three-dimensional layered architecture, tungsten is frequently used as a material for interlayer conducting connections. Within the preequilibrium model, tungsten-fragmentation features, including the cross sections for the elastic and inelastic scattering of protons of energy between 30 and 240 MeV; the yields of isotopes and isobars; their energy, charge, and mass distributions; and recoil energy spectra, are calculated on the basis of the TALYS and EMPIRE-II-19 codes. It is shown that tungsten fragmentation affects substantially forecasts of failures of space-vehicle electronics.

  14. Activation cross section and isomeric cross section ratios for the (n,2n) reaction on 153Eu

    NASA Astrophysics Data System (ADS)

    Luo, Junhua; Jiang, Li; Li, Suyuan

    2017-10-01

    The 153Eu(n,2n)152m1,m2,gEu cross sections were measured by means of the activation technique at three neutron energies in the range 13-15 MeV. The quasimonoenergetic neutron beam was produced via the 3H(d,n)4He reaction at the Pd-300 Neutron Generator at the Chinese Academy of Engineering Physics (CAEP). The activities induced in the reaction products were measured using high-resolution γ-ray spectroscopy. The cross section for the population of the second, high-spin (8-) isomeric state was measured along with the reaction cross section populating both the ground (3-) state and the first isomeric state (0-). Cross sections were also evaluated theoretically using the numerical code TALYS-1.8, with different level-density options, at neutron energies varying from the reaction threshold to 20 MeV. The results are discussed and compared with the corresponding literature.
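    A sketch of how the level-density comparison described here could be scripted: the snippet writes one TALYS input file per level-density option. The keyword names (projectile, element, mass, energy, ldmodel) follow my recollection of the TALYS input format and should be checked against the TALYS manual; the directory layout, the energy grid and the number of ldmodel options are assumptions.

      from pathlib import Path

      # Assumed TALYS keyword spellings; verify against the TALYS manual.
      energies = [13.5, 14.1, 14.8, 16.0, 18.0, 20.0]   # MeV, illustrative grid

      for ld in range(1, 7):            # loop over the assumed ldmodel options
          run_dir = Path(f"run_ldmodel{ld}")
          run_dir.mkdir(exist_ok=True)
          keywords = ["projectile n", "element eu", "mass 153",
                      "energy energies.dat", f"ldmodel {ld}"]
          (run_dir / "talys.inp").write_text("\n".join(keywords) + "\n")
          (run_dir / "energies.dat").write_text(
              "\n".join(f"{e:.1f}" for e in energies) + "\n")
          # The TALYS executable would then be run in each directory, e.g.
          #   talys < talys.inp > talys.out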

  15. TANGO1 and Mia2/cTAGE5 (TALI) cooperate to export bulky pre-chylomicrons/VLDLs from the endoplasmic reticulum.

    PubMed

    Santos, António J M; Nogueira, Cristina; Ortega-Bellido, Maria; Malhotra, Vivek

    2016-05-09

    Procollagens, pre-chylomicrons, and pre-very low-density lipoproteins (pre-VLDLs) are too big to fit into conventional COPII-coated vesicles, so how are these bulky cargoes exported from the endoplasmic reticulum (ER)? We have shown that TANGO1 located at the ER exit site is necessary for procollagen export. We report a role for TANGO1 and TANGO1-like (TALI), a chimeric protein resulting from fusion of MIA2 and cTAGE5 gene products, in the export of pre-chylomicrons and pre-VLDLs from the ER. TANGO1 binds TALI, and both interact with apolipoprotein B (ApoB) and are necessary for the recruitment of ApoB-containing lipid particles to ER exit sites for their subsequent export. Although export of ApoB requires the function of both TANGO1 and TALI, the export of procollagen XII by the same cells requires only TANGO1. These findings reveal a general role for TANGO1 in the export of bulky cargoes from the ER and identify a specific requirement for TALI in assisting TANGO1 to export bulky lipid particles. © 2016 Santos et al.

  16. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which are still unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn a lot of attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected into databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific species. We present the development procedure of this code and some test problems used to verify and validate it.
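    To illustrate the non-equilibrium tracking described here (a generic rate-equation sketch, not the authors' code), a chemical network can be advanced alongside the hydrodynamics by integrating dn_i/dt = formation - destruction for each species; the two-species toy network and the rate coefficients below are invented for illustration only.

      from scipy.integrate import solve_ivp

      # Toy network: A + A -> B (rate k1), B -> A + A (rate k2).
      k1 = 1.0e-16   # cm^3 s^-1, illustrative value
      k2 = 1.0e-10   # s^-1, illustrative value

      def rhs(t, n):
          nA, nB = n
          form_B = k1 * nA * nA       # formation of B from two A
          dest_B = k2 * nB            # destruction of B back into A
          return [-2.0 * form_B + 2.0 * dest_B, form_B - dest_B]

      n0 = [1.0e4, 0.0]               # initial number densities [cm^-3]
      sol = solve_ivp(rhs, (0.0, 1.0e12), n0, method="LSODA", rtol=1e-8)

      print("final n_A =", sol.y[0, -1], "cm^-3")
      print("final n_B =", sol.y[1, -1], "cm^-3")

    In a real astrochemical module the right-hand side would be assembled from a rate database and evaluated in every hydrodynamic cell and time step.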

  17. Description of Differential Cross Sections for 63Cu + p Nuclear Reactions Induced by High-Energy Cosmic-Ray Protons

    NASA Astrophysics Data System (ADS)

    Chuvilskaya, T. V.; Shirokova, A. A.

    2018-03-01

    We present the results of calculations of 63Cu + p differential cross sections at incident-proton energies between 10 and 200 MeV, together with a comparative analysis of these results. This work continues our group's earlier efforts to develop methods for calculating the contribution of nuclear reactions to radiative effects arising in onboard spacecraft electronics under the action of high-energy cosmic-ray protons on 63Cu nuclei (generation of single-event upsets), and supplements earlier calculations performed with the TALYS code to determine elastic- and inelastic-scattering cross sections and the charge, mass, and energy distributions of recoil nuclei (heavy products of the 63Cu + p nuclear reaction). The influence of various reaction mechanisms on the angular distributions of particles emitted in the 63Cu + p nuclear reaction is also discussed.

  18. Transfer reaction code with nonlocal interactions

    DOE PAGES

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

    We present a suite of codes (NLAT, for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon–target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second-order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable to be applied for deuteron induced reactions in the range of Ed = 10–70 MeV, and provides cross sections with 4% accuracy.

  19. Determination of the cross section for (n,p) and (n,α) reactions on (165)Ho at 13.5 and 14.8MeV.

    PubMed

    Luo, Junhua; An, Li; Jiang, Li; He, Long

    2015-04-01

    Activation cross-sections for the (165)Ho(n,p)(165)Dy and (165)Ho(n,α)(162)Tb reactions were measured by means of the activation method at 13.5 and 14.8 MeV, to resolve inconsistencies in the existing data. A neutron beam produced via the (3)H(d,n)(4)He reaction was used. Statistical model calculations were performed using the nuclear-reaction codes EMPIRE-3.2 Malta and TALYS-1.6 with default parameters, at neutron energies varying from the reaction threshold to 20 MeV. The results are also discussed and compared with corresponding values found in the literature. The calculated results for the (165)Ho(n,α)(162)Tb reaction agreed fairly well with the experimental data, but there were large discrepancies in the results for the (165)Ho(n,p)(165)Dy reaction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Reaction path of energetic materials using THOR code

    NASA Astrophysics Data System (ADS)

    Durães, L.; Campos, J.; Portugal, A.

    1997-07-01

    The method of predicting the reaction path using a thermochemical computer code, named THOR, allows, for isobaric and isochoric adiabatic combustion and CJ detonation regimes, the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using a thermal equation of state (EoS). The HL EoS used here is a new EoS developed in previous work. It is supported by a Boltzmann EoS, taking α = 13.5 for the exponent of the intermolecular potential and θ = 1.4 for the dimensionless temperature. The code now allows various sets of reaction products to be estimated, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, described, calculated and discussed (Ammonium Nitrate based explosives and Nitromethane), because they are well-known explosives and their equivalence ratios are respectively near and greater than stoichiometric. Predictions of the detonation properties of other condensed explosives, as a function of energy release, show good correlation with experimental values.

  1. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A., E-mail: apalumbo@bnl.gov; Herman, M.; Capote, R.

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged-particle- and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present the application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  2. Extension of the energy range of the experimental activation cross-sections data of longer-lived products of proton induced nuclear reactions on dysprosium up to 65MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-04-01

    Activation cross-section data of longer-lived products of proton induced nuclear reactions on dysprosium were extended up to 65 MeV by using the stacked-foil irradiation and gamma spectrometry experimental methods. Experimental cross-section data for the formation of the radionuclides (159)Dy, (157)Dy, (155)Dy, (161)Tb, (160)Tb, (156)Tb, (155)Tb, (154m2)Tb, (154m1)Tb, (154g)Tb, (153)Tb, (152)Tb and (151)Tb are reported in the 36-65 MeV energy range and compared with an old dataset from 1964. The experimental data were also compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS nuclear reaction model code as listed in the latest on-line library TENDL-2013. Copyright © 2015. Published by Elsevier Ltd.

  3. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions ({approx} keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with {gamma}-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and {gamma}-ray strength functions. The results can be converted into ENDF-6

  4. Cyclotron production of 48V via natTi(d,x)48V nuclear reaction; a promising radionuclide

    NASA Astrophysics Data System (ADS)

    Usman, A. R.; Khandaker, M. U.; Haba, H.

    2017-06-01

    In this experimental work, we studied the excitation function of the natTi(d,x)48V nuclear reaction from 24 MeV down to the threshold energy. Natural titanium foils were arranged using the well-established stacked-foil method and activated with a deuteron beam from the AVF cyclotron at RIKEN, Wako, Japan. The γ activities emitted from the activated foils were measured using off-line γ-ray spectrometry. The present results were analyzed and compared with earlier published experimental data and with the evaluated data of the TALYS code. Our newly measured data agree with some of the earlier reported experimental data, while only partial agreement is found with the evaluated theoretical data. In addition to the use of 48V as a beam intensity monitor, recent studies indicate its potential as a calibration source for PET cameras and as a radioactive label for medical applications. The results are also expected to further enrich the experimental database and to play an important role in the design of nuclear reaction model codes.

  5. Cross sections of deuteron induced reactions on (nat)Sm for production of the therapeutic radionuclides ¹⁴⁵Sm and ¹⁵³Sm.

    PubMed

    Tárkányi, F; Hermanne, A; Takács, S; Ditrói, F; Csikai, J; Ignatyuk, A V

    2014-09-01

    At present, targeted radiotherapy (TR) is acknowledged to have great potential in oncology. A large list of interesting radionuclides is identified, including several radioisotopes of lanthanides, amongst them (145)Sm and (153)Sm. In this work the possibility of their production at a cyclotron was investigated using a deuteron beam and a samarium target. The excitation functions of the (nat)Sm(d,x)(145,153)Sm reactions were determined for deuteron energies up to 50 MeV using the stacked-foil technique and high-resolution γ-ray spectrometry. The measured cross sections and the contributing reactions were analyzed by comparison with results of the ALICE, EMPIRE and TALYS nuclear reaction codes. A short overview and comparison of possible production routes is given. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Activation cross-sections of proton induced reactions on vanadium in the 37-65 MeV energy range

    NASA Astrophysics Data System (ADS)

    Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.

    2016-08-01

    Experimental excitation functions for proton induced reactions on natural vanadium in the 37-65 MeV energy range were measured with the activation method using a stacked foil irradiation technique. By using high resolution gamma spectrometry cross-section data for the production of 51,48Cr, 48V, 48,47,46,44m,44g,43Sc and 43,42K were determined. Comparisons with the earlier published data are presented and results predicted by different theoretical codes (EMPIRE and TALYS) are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental yield data. Depth distribution curves to be used for thin layer activation (TLA) are also presented.

  7. 197Au(n,2n) reaction cross section in the 15-21 MeV energy range

    NASA Astrophysics Data System (ADS)

    Kalamara, A.; Vlastou, R.; Kokkoris, M.; Nicolis, N. G.; Patronis, N.; Serris, M.; Michalopoulou, V.; Stamatopoulos, A.; Lagoyannis, A.; Harissopulos, S.

    2018-03-01

    The cross section of the 197Au(n,2n)196Au reaction has been determined at six energies ranging from 15.3 to 20.9 MeV by means of the activation technique, relative to the 27Al(n,α)24Na reaction. Quasimonoenergetic neutron beams were produced via the 3H(d,n)4He reaction at the 5.5 MV Tandem T11/25 accelerator laboratory of NCSR "Demokritos". After the irradiations, the induced γ-ray activity of the target and reference foils was measured with high-resolution HPGe detectors. The cross section for the high-spin isomeric state (12-) was determined along with the sum of the ground (2-), first (5+), and second (12-) isomeric states. Theoretical calculations were carried out with the codes EMPIRE 3.2.2 and TALYS 1.8. Optimum input parameters were chosen so as to simultaneously reproduce several experimental reaction channel cross sections in a satisfactory way, namely the (n,elastic), (n,2n), (n,3n), (n,p), (n,α), and (n,total) ones.

  8. Cross-section measurement for the 67Zn(n, α)64Ni reaction at 6.0 MeV

    NASA Astrophysics Data System (ADS)

    Zhang, Guohui; Wu, Hao; Zhang, Jiaguo; Liu, Jiaming; Chen, Jinxiang; Gledenov, Yu. M.; Sedysheva, M. V.; Khuukhenkhuu, G.; Szalanski, P. J.

    2010-01-01

    Up to now, no experimental cross-section data have existed for the 67Zn(n,α)64Ni reaction in the MeV neutron energy region. In the present work, the cross-section of the 67Zn(n,α)64Ni reaction was measured at En = 6.0 MeV. The experiments were performed at the Van de Graaff accelerator of Peking University, China. Fast neutrons were produced through the D(d,n)3He reaction using a deuterium gas target. The absolute neutron flux was determined by a small 238U fission chamber, and a BF3 long counter was used as a neutron flux monitor. A twin gridded ionization chamber was employed as the α-particle detector, and two back-to-back 67Zn samples were used for the measurement of α events. The background was measured and subtracted from the foreground. The measured cross-section of the 67Zn(n,α)64Ni reaction was 7.3 (1 ± 15%) mb at 6.0 MeV. The present result was compared with existing evaluations and TALYS code calculations.

  9. Sir John Pople, Gaussian Code, and Complex Chemical Reactions

    Science.gov Websites

    Synopsis page on Sir John Pople, the Gaussian code, and complex chemical reactions: a computational tool that describes the dance of molecules in chemical reactions, the colors of light they will absorb or emit, and the pace of chemical reactions.

  10. Measurement of 235U(n,n'γ) and 235U(n,2nγ) reaction cross sections

    NASA Astrophysics Data System (ADS)

    Kerveno, M.; Thiry, J. C.; Bacquias, A.; Borcea, C.; Dessagne, P.; Drohé, J. C.; Goriely, S.; Hilaire, S.; Jericha, E.; Karam, H.; Negret, A.; Pavlik, A.; Plompen, A. J. M.; Romain, P.; Rouki, C.; Rudolf, G.; Stanoiu, M.

    2013-02-01

    The design of generation IV nuclear reactors and the studies of new fuel cycles require knowledge of the cross sections of various nuclear reactions. Our research is focused on (n,xnγ) reactions occurring in these new reactors. The aim is to measure unknown cross sections and to reduce the uncertainty on present data for reactions and isotopes of interest for transmutation or advanced reactors. The present work studies the 235U(n,n'γ) and 235U(n,2nγ) reactions in the fast neutron energy domain (up to 20 MeV). The experiments were performed with the Geel electron linear accelerator GELINA, which delivers a pulsed white neutron beam. The time characteristics enable measuring neutron energies with the time-of-flight (TOF) technique. The neutron induced reactions [in this case inelastic scattering and (n,2n) reactions] are identified by on-line prompt γ spectroscopy with an experimental setup including four high-purity germanium (HPGe) detectors. A fission ionization chamber is used to monitor the incident neutron flux. The experimental setup and analysis methods are presented and the model calculations performed with the TALYS-1.2 code are discussed.

  11. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe nucleon-, light-charged-particle- (up to alpha-particle) and photon-induced reactions. The code is written in the C++ programming language using an object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has since been extensively used in various nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of the coding and inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission are mentioned.

  12. Measurement of activation cross-section of long-lived products in deuteron induced nuclear reactions on palladium in the 30-50MeV energy range.

    PubMed

    Ditrói, F; Tárkányi, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2017-10-01

    Excitation functions were measured in the 31-49.2 MeV energy range for the natPd(d,xn)111,110m,106m,105,104g,103Ag, natPd(d,x)111m,109,101,100Pd, natPd(d,x)105,102m,102g,101m,101g,100,99m,99gRh and natPd(d,x)103,97Ru nuclear reactions by using the stacked-foil irradiation technique. The experimental results are compared with our previous results and with the theoretical predictions calculated with the ALICE-D, EMPIRE-D and TALYS (TENDL libraries) codes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Evaluation of nuclear reaction cross section data for the production of (87)Y and (88)Y via proton, deuteron and alpha-particle induced transmutations.

    PubMed

    Zaneb, H; Hussain, M; Amjad, N; Qaim, S M

    2016-06-01

    Proton, deuteron and alpha-particle induced reactions on (87,88)Sr, (nat)Zr and (85)Rb targets were evaluated for the production of (87,88)Y. The literature data were compared with nuclear model calculations using the codes ALICE-IPPE, TALYS 1.6 and EMPIRE 3.2. Evaluated cross sections were generated, and from them thick-target yields of (87,88)Y were calculated. Analysis of radio-yttrium impurities and yields showed that the (87)Sr(p,n)(87)Y and (88)Sr(p,n)(88)Y reactions are the best routes for the production of (87)Y and (88)Y, respectively. The calculated yield for the (87)Sr(p,n)(87)Y reaction is 104 MBq/μAh in the energy range of 14→2.7 MeV. Similarly, the calculated yield for the (88)Sr(p,n)(88)Y reaction is 3.2 MBq/μAh in the energy range of 15→7 MeV. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Visualized kinematics code for two-body nuclear reactions

    NASA Astrophysics Data System (ADS)

    Lee, E. J.; Chae, K. Y.

    2016-05-01

    One- or few-nucleon transfer reactions are a powerful tool for investigating the single-particle properties of a nucleus. Both stable and exotic beams are utilized to study transfer reactions in normal and inverse kinematics, respectively. Because many energy levels of the heavy recoil from a two-body nuclear reaction can be populated at a single beam energy, identifying each populated state is essential, although this is often non-trivial owing to the high level density of the nucleus. For identification of the energy levels, a visualized kinematics code called VISKIN has been developed using the Java programming language. The development procedure, usage, and application of VISKIN are reported.
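    The core of any such tool is the two-body kinematics itself. A minimal, non-relativistic sketch (not the VISKIN implementation) is given below: the ejectile energy for A(a,b)B at a given laboratory angle follows from energy and momentum conservation. All masses, the beam energy, the Q value and the excitation energy used here are hypothetical placeholders.

      import numpy as np

      def ejectile_energy(T_beam, m_a, m_b, m_B, q_value, e_excited, theta_lab_deg):
          """Non-relativistic ejectile kinetic energy (MeV) for A(a,b)B at a lab angle.

          Masses in amu (only ratios matter), energies in MeV. Returns None when the
          angle is kinematically forbidden. e_excited is the recoil excitation energy.
          """
          theta = np.radians(theta_lab_deg)
          q_eff = q_value - e_excited
          r = np.sqrt(m_a * m_b * T_beam) * np.cos(theta) / (m_b + m_B)
          s = (m_B * q_eff + (m_B - m_a) * T_beam) / (m_b + m_B)
          disc = r * r + s
          return (r + np.sqrt(disc)) ** 2 if disc >= 0.0 else None

      # Hypothetical (d,p) transfer: 15 MeV deuterons, Q = +3.0 MeV, recoil either in
      # its ground state or in an assumed 1.5 MeV excited state.
      for e_x in (0.0, 1.5):
          energies = [ejectile_energy(15.0, 2.014, 1.008, 91.0, 3.0, e_x, ang)
                      for ang in (10.0, 30.0, 60.0, 90.0)]
          print(f"E_x = {e_x:.1f} MeV -> proton energies (MeV): {energies}")

    Plotting such angle-energy curves for every candidate level is essentially what a visualized kinematics tool automates.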

  15. Capture and photonuclear reaction rates involving charged particles: Impacts of nuclear ingredients and future measurements at ELI-NP

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Goriely, S.; Balabanski, D. L.; Chesnevskaya, S.; Guardo, G. L.; La Cognata, M.; Lan, H. Y.; Lattuada, D.; Luo, W.; Matei, C.

    2018-05-01

    The astrophysical p-process is an important nucleosynthesis route for producing the stable, proton-rich nuclei beyond Fe that cannot be reached by the s- and r-processes. In the present study, the impact of nuclear ingredients, especially the nuclear potential, level density and strength function, on the astrophysical reaction rates of (p,γ), (α,γ), (γ,p), and (γ,α) reactions is systematically studied. The calculations are performed based on the modern reaction code TALYS for about 3000 stable and proton-rich nuclei with 12≤Z≤110. In particular, both the Woods-Saxon potential and the microscopic folding potential are taken into account. It is found that both the capture and photonuclear reaction rates are very sensitive to the nuclear potential, so a better determination of the nuclear potential would be important for reducing the uncertainties of the reaction rates. Meanwhile, the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) facility is being developed, which will provide a great opportunity to experimentally study the photonuclear reactions of the p-process. Simulations of the experimental setup for measurements of the photonuclear reactions 96Ru(γ,p) and 96Ru(γ,α) are performed. It is shown that experiments on p-process photonuclear reactions at ELI-NP are quite promising.
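    For orientation, the reaction rates discussed here are Maxwellian averages of the excitation function; once σ(E) is available (from TALYS or elsewhere), the rate at temperature T9 follows from a single integral. A minimal numerical sketch with a purely hypothetical (p,γ) cross-section grid:

      import numpy as np

      def rate_NA_sigma_v(e_mev, sigma_barn, reduced_mass_amu, t9):
          """Maxwellian-averaged rate N_A<sigma*v> in cm^3 s^-1 mol^-1.

          Standard form: 3.7318e10 * mu^-1/2 * T9^-3/2 *
          integral( E * sigma(E) * exp(-11.605 E / T9) dE ), E in MeV, sigma in barn.
          """
          integrand = e_mev * sigma_barn * np.exp(-11.605 * e_mev / t9)
          integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e_mev))
          return 3.7318e10 * integral / (np.sqrt(reduced_mass_amu) * t9 ** 1.5)

      # Hypothetical smooth (p,gamma) excitation function on a 0.1-10 MeV grid.
      energy = np.linspace(0.1, 10.0, 500)
      sigma = 1.0e-3 * energy ** 2 * np.exp(-energy / 2.0)   # barn, placeholder shape
      for t9 in (1.0, 2.0, 3.0):
          print(f"T9 = {t9}: N_A<sv> ~ {rate_NA_sigma_v(energy, sigma, 0.99, t9):.3e}")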

  16. Measurement of formation cross-section of 99Mo from the 98Mo(n,γ) and 100Mo(n,2n) reactions.

    PubMed

    Badwar, Sylvia; Ghosh, Reetuparna; Lawriniang, Bioletty M; Vansola, Vibha; Sheela, Y S; Naik, Haladhara; Naik, Yeshwant; Suryanarayana, Saraswatula V; Jyrwa, Betylda; Ganesan, Srinivasan

    2017-11-01

    The formation cross-section of the medical isotope 99Mo from the 98Mo(n,γ) reaction at a neutron energy of 0.025 eV and from the 100Mo(n,2n) reaction at neutron energies of 11.9 and 15.75 MeV has been determined by using the activation and off-line γ-ray spectrometric technique. The thermal neutron energy of 0.025 eV was obtained from the reactor critical facility at BARC, Mumbai, whereas the average neutron energies of 11.9 and 15.75 MeV were generated using the 7Li(p,n) reaction at the Pelletron facility at TIFR, Mumbai. The experimentally determined cross-sections were compared with the evaluated nuclear data libraries ENDF/B-VII.1, CENDL-3.1, JENDL-4.0 and JEFF-3.2 and were found to be in close agreement. The 100Mo(n,2n)99Mo reaction cross-sections were also calculated theoretically by using the TALYS-1.8 and EMPIRE-3.2 computer codes and compared with the experimental data. Copyright © 2017 Elsevier Ltd. All rights reserved.
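    The activation arithmetic behind such a measurement is compact enough to sketch: the cross section follows from the photopeak counts after correcting for γ intensity, detector efficiency, and the irradiation, cooling and counting decay factors. All numbers below are placeholders, not values from this work, and a constant neutron flux is assumed.

      import math

      def activation_cross_section(counts, flux, n_atoms, eff, i_gamma,
                                   half_life_s, t_irr, t_cool, t_meas):
          """Cross section in barn from a photopeak area (constant-flux approximation)."""
          lam = math.log(2.0) / half_life_s
          timing = ((1.0 - math.exp(-lam * t_irr))
                    * math.exp(-lam * t_cool)
                    * (1.0 - math.exp(-lam * t_meas)))
          sigma_cm2 = counts * lam / (flux * n_atoms * eff * i_gamma * timing)
          return sigma_cm2 / 1.0e-24

      # Placeholder inputs: a 99Mo-like half-life (~66 h), hypothetical counts and flux.
      sigma = activation_cross_section(counts=2.4e4, flux=1.0e7, n_atoms=3.0e21,
                                       eff=0.02, i_gamma=0.12,
                                       half_life_s=66.0 * 3600.0,
                                       t_irr=4.0 * 3600.0, t_cool=2.0 * 3600.0,
                                       t_meas=1.0 * 3600.0)
      print(f"sigma ~ {sigma:.3f} b")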

  17. Photonuclear reactions in astrophysical p-process: Theoretical calculations and experiment simulation based on ELI-NP

    NASA Astrophysics Data System (ADS)

    Xu, Yi; Luo, Wen; Balabanski, Dimiter; Goriely, Stephane; Matei, Catalin; Tesileanu, Ovidiu

    2017-09-01

    The astrophysical p-process is an important nucleosynthesis route for producing the stable, proton-rich nuclei beyond Fe that cannot be reached by the s- and r-processes. In the present study, the astrophysical reaction rates of (γ,n), (γ,p), and (γ,α) reactions are computed with the modern reaction code TALYS for about 3000 stable and proton-rich nuclei with 12 < Z < 110. The nuclear structure ingredients involved in the calculation are determined from experimental data whenever available and, if not, from global microscopic nuclear models. In particular, both the Woods-Saxon potential and the double-folding potential with the density-dependent M3Y (DDM3Y) effective interaction are used for the calculations. It is found that the photonuclear reaction rates are very sensitive to the nuclear potential, and a better determination of the nuclear potential would be important for reducing the uncertainties of the reaction rates. Meanwhile, the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) facility is being developed, which will provide a great opportunity to experimentally study the photonuclear reactions of the p-process. Simulations of the experimental setup for measurements of the photonuclear reactions 96Ru(γ,p) and 96Ru(γ,α) are performed. It is shown that experiments on p-process photonuclear reactions at ELI-NP are quite promising.

  18. Extension of activation cross-section data of deuteron induced nuclear reactions on cadmium up to 50 MeV

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Tárkányi, F.; Takács, S.; Ditrói, F.

    2016-10-01

    The excitation functions for 109,110g,111m+g,113m,114m,115mIn, 107,109,115m,115gCd and 105g,106m,110g,111Ag are presented for stacked foil irradiations on natCd targets in the 49-33 MeV deuteron energy domain. Reduced uncertainty is obtained by determining the incident particle flux and energy scale relative to the re-measured natAl(d,x)22,24Na monitor reactions. The results were compared to our earlier studies on natCd and on enriched 112Cd targets. The merit of the values predicted by the TALYS 1.6 code (resulting from a weighted combination of reaction cross-section data on all stable Cd isotopes as available in the on-line libraries TENDL-2014 and TENDL-2015) is discussed. The influence on optimal production routes for several radionuclides with practical applications (111In, 114mIn, 115Cd, 109,107Cd….) is reviewed.

  19. Reliability and Structure of the TALIS Social Desirability Scale: An Assessment Based on Item Response Theory

    ERIC Educational Resources Information Center

    Kapuza, A. V.; Tyumeneva, Yu. A.

    2017-01-01

    One of the ways of controlling for the influence of social expectations on the answers given by survey respondents is to use a social desirability scale together with the main questions. The social desirability scale, which was included in the Teaching and Learning International Survey (TALIS) international comparative study for this purpose, was…

  20. Moving Towards a State of the Art Charge-Exchange Reaction Code

    NASA Astrophysics Data System (ADS)

    Poxon-Pearson, Terri; Nunes, Filomena; Potel, Gregory

    2017-09-01

    Charge-exchange reactions have a wide range of applications, including late stellar evolution, constraining the matrix elements for neutrinoless double β-decay, and exploring the symmetry energy and other aspects of exotic nuclear matter. Still, much of the reaction theory needed to describe these transitions is underdeveloped and relies on assumptions and simplifications that are often extended outside their region of validity. In this work, we have begun to move towards a state of the art charge-exchange reaction code. As a first step, we focus on Fermi transitions using a Lane potential in a few-body, distorted-wave Born approximation (DWBA) framework. We have focused on maintaining a modular structure for the code so we can later incorporate complications such as nonlocality, breakup, and microscopic inputs. Results obtained with this new charge-exchange code will be shown and compared with the existing analysis of 48Ca(p,n)48Sc. This work was supported in part by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through the U.S. DOE Cooperative Agreement No. DE-FG52-08NA2855.

  1. Experimentally constrained (p,γ)89Y and (n,γ)89Y reaction rates relevant to p-process nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, A. C.; Guttormsen, M.; Schwengner, R.

    The nuclear level density and the γ-ray strength function have been extracted for 89Y, using the Oslo method on 89Y(p,p'γ)89Y coincidence data. The γ-ray strength function displays a low-energy enhancement consistent with previous observations in this mass region (93-98Mo). Shell-model calculations support the interpretation that the observed enhancement is due to strong, low-energy M1 transitions at high excitation energies. The data were further used as input for calculations of the 88Sr(p,γ)89Y and 88Y(n,γ)89Y cross sections with the TALYS reaction code. Lastly, comparison with cross-section data, where available, as well as with values from the BRUSLIB library, shows a satisfying agreement.

  2. Integral cross section measurement of the 235U(n,n')235mU reaction in a pulsed reactor

    DOE PAGES

    Bélier, G.; Bond, E. M.; Vieira, D. J.; ...

    2015-04-08

    The integral measurement of the neutron inelastic cross section leading to the 26-minute half-life isomer 235mU in a fission-like neutron spectrum is presented. The experiment was performed at a pulsed reactor, where the internal conversion decay of the isomer was measured using a dedicated electron detector after activation. The sample preparation, efficiency measurement, irradiation, radiochemical purification, and isomer decay measurement are presented. We determined the integral cross section for the 235U(n,n')235mU reaction to be 1.00±0.13 b. This result supports an evaluation performed with the TALYS-1.4 code with respect to both the isomer excitation and the total neutron inelastic scattering cross section.
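    An integral (spectrum-averaged) cross section of this kind is simply the excitation function folded with the neutron spectrum, ⟨σ⟩ = ∫σ(E)φ(E)dE / ∫φ(E)dE. A minimal numerical sketch, using a Watt form with parameters commonly quoted for 235U thermal fission and a made-up excitation function (neither is the evaluation used in the paper):

      import numpy as np

      def trapz(y, x):
          return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

      e = np.linspace(0.01, 20.0, 2000)                     # neutron energy, MeV

      # Watt fission spectrum; a ~ 0.988 MeV and b ~ 2.249 1/MeV are commonly used
      # parameters for 235U thermal fission (normalization cancels in the average).
      phi = np.exp(-e / 0.988) * np.sinh(np.sqrt(2.249 * e))

      # Hypothetical excitation function: threshold near 0.1 MeV, ~1 b plateau.
      sigma = np.where(e > 0.1, 1.0 * (1.0 - np.exp(-(e - 0.1) / 0.5)), 0.0)

      sigma_avg = trapz(sigma * phi, e) / trapz(phi, e)
      print(f"spectrum-averaged cross section ~ {sigma_avg:.2f} b")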

  3. Experimentally constrained (p,γ)89Y and (n,γ)89Y reaction rates relevant to p-process nucleosynthesis

    DOE PAGES

    Larsen, A. C.; Guttormsen, M.; Schwengner, R.; ...

    2016-04-21

    The nuclear level density and the γ-ray strength function have been extracted for 89Y, using the Oslo method on 89Y(p,p'γ)89Y coincidence data. The γ-ray strength function displays a low-energy enhancement consistent with previous observations in this mass region (93-98Mo). Shell-model calculations support the interpretation that the observed enhancement is due to strong, low-energy M1 transitions at high excitation energies. The data were further used as input for calculations of the 88Sr(p,γ)89Y and 88Y(n,γ)89Y cross sections with the TALYS reaction code. Lastly, comparison with cross-section data, where available, as well as with values from the BRUSLIB library, shows a satisfying agreement.

  4. What Did We Learn about Our Teachers and Principals? Results of the TALIS-2013 International Comparative Study

    ERIC Educational Resources Information Center

    Pinskaya, M. A.; Lenskaya, E. A.; Ponomareva, A. A.; Brun, I. V.; Kosaretsky, S. G.; Savelyeva, M. B.

    2016-01-01

    The Teaching and Learning International Survey (TALIS) is a large-scale and authoritative international study of teachers. It is conducted by the Organization for Economic Cooperation and Development (OECD) to collect and compare information about teachers and principals in different countries in such key areas as the training and professional…

  5. Between Ritual and Spiritual: Teachers' Perceptions and Practices Regarding Prayer Education in TALI Day Schools in Israel

    ERIC Educational Resources Information Center

    Muszkat-Barkan, Michal

    2015-01-01

    The aim of this qualitative study is to describe teachers' perceptions and roles in prayer education in TALI day schools in Israel, using in-depth oral interviews, written questionnaires and written materials of the schools' network. Two educational ideologies were identified: Belonging to the Jewish collective and Personal-spiritual ideology.…

  6. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
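    The need for an implicit ('stiff') integrator is easy to illustrate outside LSENS itself: a two-step sequence A → B → C with rate constants separated by four orders of magnitude forces an explicit method to take tiny steps, whereas a BDF-type method handles it comfortably. A minimal SciPy sketch (placeholder rate constants, not an LSENS input):

      import numpy as np
      from scipy.integrate import solve_ivp

      K1, K2 = 1.0e4, 1.0          # widely separated rate constants (1/s), placeholders

      def rhs(t, y):
          a, b, c = y
          return [-K1 * a, K1 * a - K2 * b, K2 * b]

      sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="BDF",
                      rtol=1.0e-8, atol=1.0e-12, dense_output=True)

      for t in (1.0e-4, 1.0e-2, 1.0, 5.0):
          a, b, c = sol.sol(t)
          print(f"t = {t:8.1e} s   A = {a:.3e}   B = {b:.3e}   C = {c:.3e}")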

  7. Extension of the energy range of experimental activation cross-sections data of deuteron induced nuclear reactions on indium up to 50 MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-11-01

    The energy range of our earlier measured activation cross-section data for longer-lived products of deuteron-induced nuclear reactions on indium was extended from 40 MeV up to 50 MeV. The traditional stacked-foil irradiation technique and non-destructive gamma spectrometry were used. No experimental data were found in the literature for this higher energy range. Experimental cross-sections for the formation of the radionuclides (113,110)Sn, (116m,115m,114m,113m,111,110g,109)In and (115)Cd are reported in the 37-50 MeV energy range; for the production of (110)Sn and (110g,109)In these are the first measurements ever. The experimental data were compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS 1.6 nuclear model code as listed in the on-line library TENDL-2014. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Calculated differential and double differential cross section of DT neutron induced reactions on natural chromium (Cr)

    NASA Astrophysics Data System (ADS)

    Rajput, Mayank; Vala, Sudhirsinh; Srinivasan, R.; Abhangi, M.; Subhash, P. V.; Pandey, B.; Rao, C. V. S.; Bora, D.

    2018-01-01

    Chromium is an important alloying element of stainless steel (SS), and SS is the main constituent of the structural materials proposed for fusion reactors. Energy and double-differential cross-section data are required to estimate nuclear responses in the materials used in fusion reactors. No experimental energy or double-differential cross-section data are available for neutron-induced reactions on natural chromium at 14 MeV neutron energy. In this study, energy and double-differential cross-section data of the (n,p) and (n,α) reactions for all the stable isotopes of chromium have been estimated using appropriate nuclear models in the TALYS code. The cross-section data of the stable isotopes are then converted into energy and double-differential cross-section data for natural Cr using the isotopic abundances. The contributions from compound, pre-equilibrium and direct nuclear reactions to the total reaction have also been calculated for 52,50Cr(n,p) and 52Cr(n,α). The calculated energy-differential cross sections show that most of the emitted protons and alpha particles have energies of about 3 and 8 MeV, respectively. The calculated data are compared with data from the EXFOR library and are found to be in good agreement.
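    The elemental cross section is obtained exactly as described: the isotopic values are weighted by natural abundance, σ_nat(E) = Σ_i a_i σ_i(E). A small sketch with the standard Cr abundances and placeholder 14 MeV (n,p) cross sections (not the values calculated in the paper):

      # Natural chromium isotopic abundances (atom fractions).
      abundance = {"50Cr": 0.04345, "52Cr": 0.83789, "53Cr": 0.09501, "54Cr": 0.02365}

      # Hypothetical isotopic (n,p) cross sections at 14 MeV, in mb (placeholders).
      sigma_np_mb = {"50Cr": 300.0, "52Cr": 90.0, "53Cr": 45.0, "54Cr": 20.0}

      sigma_nat = sum(abundance[iso] * sigma_np_mb[iso] for iso in abundance)
      print(f"natCr(n,p) ~ {sigma_nat:.1f} mb at 14 MeV (illustrative numbers)")

    The same abundance weighting is applied bin by bin to build the energy and double-differential data for the natural element.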

  9. Systematic study of proton capture reactions in medium-mass nuclei relevant to the p process: The case of 103Rh and 113,115In

    NASA Astrophysics Data System (ADS)

    Harissopulos, S.; Spyrou, A.; Foteinou, V.; Axiotis, M.; Provatas, G.; Demetriou, P.

    2016-02-01

    The cross sections of the 103Rh(p,γ)104Pd and the 113,115In(p,γ)114,116Sn reactions have been determined from γ angular distribution measurements carried out at beam energies from 2 to 3.5 MeV. An array of four highly efficient HPGe detectors, all shielded with BGO crystals for Compton background suppression, was used. Astrophysical S factors and reaction rates were deduced from the measured cross sections. Statistical model calculations were performed using the Hauser-Feshbach (HF) code TALYS and were compared with the new data. A good agreement between theory and experiment was found. In addition, the effect of different combinations of the nuclear input parameters entering the HF calculations on the ground-state reaction rates was investigated. It was found that these rates differ by a factor of 3 at most, thus lying within the average discrepancies observed between calculated p-nuclei abundances and observations, if certain combinations of optical model potentials, nuclear level densities, and γ-ray strength functions are used.
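    The astrophysical S factors quoted here are related to the measured cross sections by factoring out the Coulomb-barrier penetrability, S(E) = σ(E) E exp(2πη), with 2πη = 31.29 Z1 Z2 (μ/E)^1/2 for E in keV and μ in amu. A quick sketch with placeholder cross sections for a p + 103Rh-like system:

      import math

      def s_factor_mev_b(sigma_barn, e_cm_kev, z1, z2, mu_amu):
          """Astrophysical S factor in MeV*b from a cross section in barn."""
          two_pi_eta = 31.29 * z1 * z2 * math.sqrt(mu_amu / e_cm_kev)
          e_cm_mev = e_cm_kev / 1000.0
          return sigma_barn * e_cm_mev * math.exp(two_pi_eta)

      # Placeholder cross sections (barn) at a few center-of-mass energies (keV).
      mu = 1.008 * 102.906 / (1.008 + 102.906)        # reduced mass of p + 103Rh, amu
      for e_kev, sigma in [(2500.0, 2.0e-8), (3000.0, 2.0e-7), (3500.0, 1.2e-6)]:
          s = s_factor_mev_b(sigma, e_kev, 1, 45, mu)
          print(f"E_cm = {e_kev:.0f} keV -> S ~ {s:.3e} MeV b")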

  10. Cross sections of the 144Sm(n,α)141Nd and 66Zn(n,α)63Ni reactions at 4.0, 5.0 and 6.0 MeV

    NASA Astrophysics Data System (ADS)

    Yury, Gledenov; Guohui, Zhang; Khuukhenkhuu, Gonchigdorj; Milana, Sedysheva; Lubos, Krupa; Sansarbayar, Enkhbold; Igor, Chuprakov; Zhimin, Wang; Xiao, Fan; Luyu, Zhang; Huaiyong, Bai

    2017-09-01

    Cross sections of the 144Sm(n,α)141Nd and 66Zn(n,α)63Ni reactions were measured at En = 4.0, 5.0 and 6.0 MeV at the 4.5-MV Van de Graaff accelerator of Peking University, China. A double-section gridded ionization chamber was used to detect the alpha particles. Foil samples of 144Sm2O3 and enriched 66Zn were placed at the common cathode plate of the chamber. Monoenergetic neutrons were produced by a deuterium gas target through the 2H(d,n)3He reaction. The neutron flux was monitored by a BF3 long counter. Cross sections of the 238U(n,f) reaction were used as the standard for the (n,α) reaction measurement. The present results are compared with existing measurements and evaluations. They are generally in agreement with TALYS-1.6 code calculations. For the 144Sm(n,α)141Nd reaction our measurements support the data of JEF-2.2; for the 66Zn(n,α)63Ni reaction the present results support the EAF-2010 and TENDL-2015 data.

  11. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  12. SurfKin: an ab initio kinetic code for modeling surface reactions.

    PubMed

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.
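    As a reminder of the transition-state-theory arithmetic referred to here (SurfKin's actual rate constants are built from DFT-derived partition functions, which are not reproduced in this sketch), the Eyring form k = (k_B T / h) exp(-ΔG‡ / RT) is enough for a toy estimate. The activation free energy below is an assumed placeholder:

      import math
      from scipy.constants import k as k_B, h, R

      def eyring_rate(delta_g_kj_mol, temperature_k):
          """First-order TST rate constant (1/s) from an activation free energy."""
          return (k_B * temperature_k / h) * math.exp(
              -delta_g_kj_mol * 1000.0 / (R * temperature_k))

      # Assumed barrier of 90 kJ/mol for a hypothetical surface elementary step.
      for T in (500.0, 700.0, 900.0):
          print(f"T = {T:.0f} K  ->  k ~ {eyring_rate(90.0, T):.3e} s^-1")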

  13. Measurement of the 115In(n,γ)116mIn reaction cross-section at the neutron energies of 1.12, 2.12, 3.12 and 4.12 MeV

    NASA Astrophysics Data System (ADS)

    Lawriniang, Bioletty Mary; Badwar, Sylvia; Ghosh, Reetuparna; Jyrwa, Betylda; Vansola, Vibha; Naik, Haladhara; Goswami, Ashok; Naik, Yeshwant; Datrik, Chandra Shekhar; Gupta, Amit Kumar; Singh, Vijay Pal; Pol, Sudir Shibaji; Subramanyam, Nagaraju Balabenkata; Agarwal, Arun; Singh, Pitambar

    2015-08-01

    The 115In(n,γ)116mIn reaction cross section at neutron energies of 1.12, 2.12, 3.12 and 4.12 MeV was determined by using an activation and off-line γ-ray spectrometric technique. The monoenergetic neutrons of 1.12-4.12 MeV were generated from the 7Li(p,n) reaction by using proton beams with energies of 3 and 4 MeV from the folded tandem ion beam accelerator (FOTIA) at Bhabha Atomic Research Centre (BARC) and with energies of 5 and 6 MeV from the Pelletron facility at Tata Institute of Fundamental Research (TIFR), Mumbai. The 197Au(n,γ)198Au reaction cross-section was used as the neutron flux monitor. The 115In(n,γ)116mIn reaction cross-sections at neutron energies of 1.12-4.12 MeV were compared with the literature data and were found to be in good agreement with one set of data, but not with others. The 115In(n,γ)116mIn cross-section was also calculated theoretically by using the computer code TALYS 1.6 and was found to be slightly lower than the experimental data from the present work and the literature.
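    The flux-monitor arithmetic behind such measurements can be sketched compactly (all counts, efficiencies and atom numbers below are hypothetical, and the decay data are only approximate): the 197Au(n,γ)198Au monitor fixes the flux, and the unknown cross section follows from the ratio of decay-corrected production rates.

      import math

      def production_rate(counts, eff, i_gamma, half_life_s, t_irr, t_cool, t_meas):
          """Nuclei produced per second during irradiation, assuming constant flux."""
          lam = math.log(2.0) / half_life_s
          timing = ((1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)
                    * (1.0 - math.exp(-lam * t_meas)))
          return counts * lam / (eff * i_gamma * timing)

      # Hypothetical photopeak data; half-lives ~2.7 d (198Au) and ~54 min (116mIn).
      R_mon = production_rate(5.0e4, 0.012, 0.956, 2.33e5, 3600.0, 1800.0, 3600.0)
      R_sam = production_rate(8.0e4, 0.015, 0.844, 3.26e3, 3600.0, 600.0, 1800.0)

      N_mon, N_sam = 2.0e20, 5.0e20           # target atoms (hypothetical)
      sigma_mon_b = 0.08                      # assumed monitor cross section, barn

      flux = R_mon / (N_mon * sigma_mon_b * 1.0e-24)          # n / (cm^2 s)
      sigma_sam_b = R_sam / (N_sam * flux) / 1.0e-24
      print(f"deduced sample cross section ~ {sigma_sam_b:.3f} b")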

  14. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.

    PubMed

    Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso

    2018-04-22

    This study provides current evidence about cross-section production processes in the theoretical and experimental results of neutron-induced reactions on a uranium isotope over the projectile energy range of 1-100 MeV, in order to improve the reliability of nuclear simulation. In fission reactions of 235U within nuclear reactors, a large amount of energy is released, which could help satisfy worldwide energy needs with less pollution than other sources. The main objective of this work is to describe, analyze and interpret the theoretical cross sections obtained from the computer code COMPLET for neutron-induced reactions on 235U by comparing them with experimental data obtained from EXFOR. The cross-section values for 235U(n,2n)234U, 235U(n,3n)233U, 235U(n,γ)236U and 235U(n,f) were obtained using the computer code COMPLET, and the corresponding experimental values were retrieved from the IAEA EXFOR database. COMPLET was run with the same set of input parameters for all reactions, and the graphs were plotted with the help of spreadsheet and Origin-8 software. The quantification of uncertainties stemming from both the experimental data and the code calculations plays a significant role in the final evaluated results. The calculated total cross sections were compared with the experimental data taken from EXFOR, and good agreement was found between the experimental and theoretical data. The comparison was analyzed and interpreted with tabulations and graphical descriptions, and the results are briefly discussed in the text of this work. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  16. Investigation deuteron-induced reactions on cobalt

    NASA Astrophysics Data System (ADS)

    Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.; Baba, M.; Ignatyuk, A. V.

    2010-09-01

    The excitation functions of deuteron-induced reactions were measured on metallic cobalt. In addition to the cobalt isotopes 56,57,58,60Co, we also identified 57Ni, 54Mn, 56Mn and 59Fe in the deuteron experiments. For these radionuclides, the excitation functions in the measured energy range were determined and compared with the data found in the literature and with the results of model calculations (ALICE-IPPE, EMPIRE-D, EAF, and TALYS (TENDL)). The excitation functions agree with previous measurements; furthermore, we calculated the yield and thin layer activation (TLA) curves that are necessary for practical and industrial applications.

  17. Isomeric yield ratios of 87m,gY from different nuclear reactions

    NASA Astrophysics Data System (ADS)

    Naik, H.; Kim, G. N.; Kim, K.; Zaman, M.; Sahid, M.; Yang, S.-C.; Lee, M. W.; Kang, Y. R.; Shin, S. G.; Cho, M.-H.; Goswami, A.; Song, T. Y.

    2014-07-01

    The independent isomeric yield ratios of 87m,gY produced from the 93Nb(γ,α2n) and natZr(γ,pxn) reactions with end-point bremsstrahlung energies of 45-70 MeV have been determined by an off-line γ-ray spectrometric technique using the 100 MeV electron linac at the Pohang Accelerator Laboratory, Korea. The isomeric yield ratios of 87m,gY were also determined from the natZr(p,αxn) and 89Y(p,p2n) reactions with Ep = 15-45 MeV, as well as from the 89Y(α,α2n) reaction with Eα = 32-43 MeV, using the MC-50 cyclotron at the Korea Institute of Radiological and Medical Science, Korea. The isomeric yield ratios of 87m,gY from the present work in the 93Nb(γ,α2n), natZr(γ,pxn), natZr(p,αxn), 89Y(p,p2n), and 89Y(α,α2n) reactions were compared with literature data for the 85Rb(α,2n), 86,87,88Sr(d,xn), 89Y(n,3n), and 89Y(γ,2n) reactions to examine the role of target, projectile, and ejectile through the compound-nucleus excitation energy and input angular momentum. The isomeric yield ratios of 87m,gY in the above eleven reactions were also calculated using the computer code TALYS 1.4 and compared with the experimental data. The different behaviors of photon- and neutron-induced reactions and charged-particle-induced reactions are discussed from the viewpoint of compound and non-compound (pre-equilibrium) processes.

  18. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
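    A schematic of the kind of rule such a controller uses (the paper's actual model and parameters are not reproduced; this is a generic FitzHugh-Nagumo-type activator-inhibitor sketch on a ring of nodes): each camera updates a local activator/inhibitor pair from its neighbours, and the activator level is mapped to a video coding rate.

      import numpy as np

      N, DT, STEPS = 40, 0.05, 4000
      DU, DV, EPS = 0.3, 0.1, 0.05            # diffusion and relaxation (placeholders)
      R_MIN, R_MAX = 0.2, 4.0                 # coding rate bounds, Mbit/s (assumed)

      rng = np.random.default_rng(0)
      u = 0.1 * rng.standard_normal(N)        # activator
      v = np.zeros(N)                         # inhibitor

      def laplacian(x):
          return np.roll(x, 1) - 2.0 * x + np.roll(x, -1)   # ring topology

      stimulus = np.zeros(N)
      stimulus[18:22] = 0.5                   # nodes currently observing a target

      for _ in range(STEPS):
          u += DT * (DU * laplacian(u) + u - u ** 3 - v + stimulus)
          v += DT * (DV * laplacian(v) + EPS * (u - v))

      act = (u - u.min()) / (u.max() - u.min() + 1e-12)
      rate = R_MIN + (R_MAX - R_MIN) * act
      print("highest coding rates at nodes:", np.argsort(rate)[-4:])

    The diffusion term spreads elevated rates to neighbouring cameras along the likely trajectory of the target, which is the qualitative behaviour the reaction-diffusion approach is meant to provide.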

  19. Cross section measurements for neutron inelastic scattering and the (n,2nγ) reaction on 206Pb

    DOE PAGES

    Negret, A.; Mihailescu, L. C.; Borcea, C.; ...

    2015-06-30

    We measured excitation functions for γ production associated with neutron inelastic scattering and the (n,2n) reaction on 206Pb from threshold up to 18 MeV for about 40 transitions. Two independent measurements were performed using different samples and acquisition systems to check the consistency of the results. Moreover, the neutron flux was determined with a 235U fission chamber and a procedure that were validated against a fluence standard. For incident energies higher than the threshold for the first excited level and up to 3.5 MeV, estimates are provided for the total inelastic and level cross sections by combining the present γ production cross sections with the level and decay data of 206Pb reported in the literature. The uncertainty common to all incident energies is 3.0%, leading to overall uncertainties from 3.3% to 30% depending on transition and neutron energy. Finally, the present data agree well with earlier work but significantly expand the experimental database, while comparisons with model calculations using the TALYS reaction code show good agreement over the full energy range.

  20. Measurement of excitation functions and analysis of isomeric population in some reactions induced by proton on natural indium at low energy

    NASA Astrophysics Data System (ADS)

    Muhammed Shan, P. T.; Musthafa, M. M.; Najmunnisa, T.; Mohamed Aslam, P.; Rajesh, K. K.; Hajara, K.; Surendran, P.; Nair, J. P.; Shanbagh, Anil; Ghugre, S.

    2018-06-01

    The excitation functions for reaction residues populated via the 115In(p,p)115mIn, 115In(p,pn)114mIn, 115In(p,p2n)113mIn, 113In(p,p)113mIn, 115In(p,nα)111mCd, 115In(p,3n)113Sn and 113In(p,n)113Sn channels were measured over the proton energy range of 8-22 MeV using the stacked-foil activation technique. Theoretical analysis of the data was performed within the framework of the two statistical model codes EMPIRE-3.2 and TALYS-1.8. Isomeric cross-section ratios for the isomeric pairs 115m,gIn, 114m,gIn, 113m,gIn, 113m,gSn and 111m,gCd were determined for the first time. The dependence of the isomeric cross-section ratio on various factors is analysed.

  1. Investigation of proton induced reactions on niobium at low and medium energies

    NASA Astrophysics Data System (ADS)

    Ditrói, F.; Hermanne, A.; Corniani, E.; Takács, S.; Tárkányi, F.; Csikai, J.; Shubin, Yu. N.

    2009-10-01

    Niobium is a metal with important technological applications: as an alloying element to increase the strength of superalloys, as a thin layer for tribological applications, as a superconducting material, in high-temperature engineering systems, etc. In the frame of a systematic study of activation cross-sections of charged-particle induced reactions on structural materials, proton-induced excitation functions on Nb targets were determined with the aim of applications in accelerator and reactor technology and for thin layer activation (TLA). The charged-particle activation cross-sections on this element are also important for yield calculations of medical isotope production (88,89Zr, 86,87,88Y) and for dose estimation in PET targetry. As niobium is a monoisotopic element, it is an ideal target material to test nuclear reaction theories. We present here the experimental excitation functions of 93Nb(p,x)90,93mMo, 92m,91m,90Nb, 88,89Zr and 88Y in the energy range 0-37 MeV. The results were compared with the theoretical cross-sections calculated by means of the ALICE-IPPE, EMPIRE-3 and TALYS codes, and with the literature data. The theory reproduces the shape of the measured results well, and the magnitude is also acceptable. Thick-target yields calculated from our fitted cross-sections give reliable estimates for the production of medically relevant radioisotopes and for dose estimation in accelerator technology.
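    Thick-target yields of the kind quoted here follow from the fitted excitation function and the stopping power: the number of product nuclei per incident proton is (N_A/A) ∫ σ(E)/S(E) dE over the energy window, with S the mass stopping power. A minimal sketch with placeholder σ(E) and S(E) tables (not the fitted Nb data):

      import numpy as np

      N_A, A_TARGET = 6.022e23, 92.906        # Avogadro, molar mass of Nb (g/mol)

      # Placeholder tables on a common energy grid (MeV).
      e = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
      sigma_mb = np.array([5.0, 120.0, 380.0, 450.0, 300.0, 200.0])   # hypothetical
      stopping = np.array([38.0, 28.0, 22.0, 18.5, 16.0, 14.2])       # MeV cm^2/g, placeholder

      sigma_cm2 = sigma_mb * 1.0e-27
      integrand = sigma_cm2 / stopping
      nuclei_per_proton = (N_A / A_TARGET) * np.sum(
          0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e))

      protons_per_uAh = 3600.0e-6 / 1.602e-19        # singly charged beam
      print(f"~{nuclei_per_proton:.3e} nuclei/proton, "
            f"{nuclei_per_proton * protons_per_uAh:.3e} nuclei per uAh")

    Multiplying by the decay constant of the product then gives the physical yield in Bq per μA·h.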

  2. High resolution measurements of the {sup 241}Am(n,2n) reaction cross section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sage, C.; European Commission, Joint Research Centre, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel; Commissariat a L'Energie Atomique Cadarache, DEN/CAD/DER/SPRC/LEPh, F-13108 St Paul-lez-Durance

    Measurements of the {sup 241}Am(n,2n) reaction cross section have been performed at the Joint Research Centre (JRC) Geel in the frame of a collaboration between the European Commission (EC) JRC and French laboratories from CNRS and the Commissariat a L'Energie Atomique (CEA) Cadarache. Raw material coming from the Atalante facility of CEA Marcoule has been transformed by JRC Karlsruhe into suitable {sup 241}AmO{sub 2} samples embedded in Al{sub 2}O{sub 3} matrices specifically designed for these measurements. The irradiations were carried out at the 7-MV Van de Graaff accelerator. The {sup 241}Am(n,2n) reaction cross section was determined relative to the {sup 27}Al(n,alpha){sup 24}Na standard cross section. The measurements were performed in four sessions, using quasi-mono-energetic neutrons with energies ranging from 8 to 21 MeV produced via the {sup 2}H(d,n){sup 3}He and the {sup 3}H(d,n){sup 4}He reactions. The induced activity was measured by standard gamma-ray spectrometry using a high-purity germanium detector. Below 15 MeV, the present results are in agreement with data obtained earlier. Above 15 MeV, these measurements allowed the experimental investigation of the {sup 241}Am(n,2n) reaction cross section for the first time. The present data are in good agreement with predictions obtained with the TALYS code that uses an optical and fission model developed at CEA.

  3. (232)Th(d,4n)(230)Pa cross-section measurements at ARRONAX facility for the production of (230)U.

    PubMed

    Duchemin, C; Guertin, A; Haddad, F; Michel, N; Métivier, V

    2014-05-01

    (226)Th (T1/2=31 min) is a promising therapeutic radionuclide since results published in 2009 showed that it induces leukemia cell death and activates apoptosis pathways with higher efficiency than (213)Bi. (226)Th can be obtained via the (230)U α decay. This study focuses on (230)U production using the (232)Th(d,4n)(230)Pa(β-)(230)U reaction. Experimental cross sections for deuteron-induced reactions on (232)Th were measured from 30 down to 19 MeV using the stacked-foil technique with beams provided by the ARRONAX cyclotron. After irradiation, all foils (targets as well as monitors) were measured using a high-purity germanium detector. Our new (230)Pa cross-section values, as well as those of the (232)Pa and (233)Pa contaminants created during the irradiation, were compared with previous measurements and with results given by the TALYS code. Experimentally, the same trends were observed, with slight differences in magnitude mainly due to changes in the nuclear data. Improvements to the TALYS code are ongoing to better reproduce the data for deuteron-induced reactions on (232)Th. Using our cross-section data points for the (232)Th(d,4n)(230)Pa reaction, we have calculated the thick-target yield of (230)U in Bq/μA·h. This value now allows a full comparison between the different production routes, showing that the proton routes are to be preferred. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Alpha particle induced reactions on natCr up to 39 MeV: Experimental cross-sections, comparison with theoretical calculations and thick target yields for medically relevant 52gFe production

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Adam Rebeles, R.; Tárkányi, F.; Takács, S.

    2015-08-01

    Thin natCr targets were obtained by electroplating, using 23.75 μm Cu foils as backings. In five stacked-foil irradiations, followed by high-resolution gamma spectroscopy, the cross sections for production of 52gFe, 49,51cumCr, 52cum,54,56cumMn and 48cumV in Cr and 61Cu, 68Ga in Cu were measured up to 39 MeV incident α-particle energy. Reduced uncertainty is obtained by simultaneous remeasurement of the natCu(α,x)67,66Ga monitor reactions over the whole energy range. Comparisons with the scarce literature values and with results from the TENDL-2013 on-line library, based on the theoretical code family TALYS-1.6, were made. The production routes for 52gFe, with achievable yields and contamination rates, are discussed.

  5. A Comparative Study on the Teaching Profession in Turkey and South Korea: Secondary Analysis of TALIS 2008 Data in Relation to Teacher Self-Efficacy

    ERIC Educational Resources Information Center

    Aslan, Berna

    2015-01-01

    Problem Statement: Teacher self-efficacy is an important factor for school and student success. This study investigates the variables that explain teacher self-efficacy in Turkey and South Korea according to TALIS 2008 data. A detailed comparison was conducted and the state of the teaching profession in both countries is discussed. Purpose of the…

  6. Measurement of the 2H(7Be, 6Li)3He reaction rate and its contribution to the primordial lithium abundance

    NASA Astrophysics Data System (ADS)

    Li, Er-Tao; Li, Zhi-Hong; Yan, Sheng-Quan; Su, Jun; Guo, Bing; Li, Yun-Ju; Wang, You-Bao; Lian, Gang; Zeng, Sheng; Chen, Si-Zhe; Ma, Shao-Bo; Li, Xiang-Qing; He, Cao; Sun, Hui-Bin; Liu, Wei-Ping

    2018-04-01

    In the standard Big Bang nucleosynthesis (SBBN) model, the lithium puzzle has attracted intense interest over the past few decades, but it still has not been solved. Conventionally, the approach is to include more reactions flowing into or out of lithium and to study the potential effects of those reactions which were not previously considered. 7Be(d,3He)6Li is a reaction that not only produces 6Li but also destroys 7Be, which decays to 7Li, thereby affecting 7Li indirectly. Therefore, this reaction could alleviate the lithium discrepancy if its reaction rate were sufficiently high. However, little information is available about the 7Be(d,3He)6Li reaction rate. In this work, the angular distributions of the 7Be(d,3He)6Li reaction are measured at the center-of-mass energies Ecm = 4.0 MeV and 6.7 MeV with secondary 7Be beams for the first time. The excitation function of the 7Be(d,3He)6Li reaction is first calculated with the computer code TALYS and then normalized to the experimental data, from which its reaction rate is deduced. An SBBN network calculation is performed to investigate its influence on the 6Li and 7Li abundances. The results show that the 7Be(d,3He)6Li reaction has a minimal effect on 6Li and 7Li because of its small reaction rate. Therefore, the 7Be(d,3He)6Li reaction is ruled out by this experiment as a means of alleviating the lithium discrepancy. Supported by the National Natural Science Foundation of China (11375269, 11505117, 11490560, 11475264, 11321064), the Natural Science Foundation of Guangdong Province (2015A030310012), the 973 Program of China (2013CB834406) and the National Key Research and Development Program of China (2016YFA0400502)

  7. Cross section measurement of alpha particle induced nuclear reactions on natural cadmium up to 52 MeV.

    PubMed

    Ditrói, F; Takács, S; Haba, H; Komori, Y; Aikawa, M

    2016-12-01

    Cross sections of alpha particle induced nuclear reactions have been measured on thin natural cadmium target foils in the energy range from 11 to 51.2 MeV. This work is part of our systematic study of the excitation functions of light-ion induced nuclear reactions on different target materials. The cross sections of alpha-induced reactions have not been investigated in sufficient depth. Some of the produced isotopes are of medical interest, while others have applications in research and industry. The radioisotope 117mSn is a very important theranostic (therapeutic + diagnostic) radioisotope, so special care was taken with the results for that isotope. The well-established stacked-foil technique followed by gamma spectrometry with HPGe spectrometers was used. The target and monitor foils in the stack were commercial high-purity metal foils. From the irradiated targets 117mSn, 113Sn, 110Sn, 117m,gIn, 116mIn, 115mIn, 114mIn, 113mIn, 111In, 110m,gIn, 109mIn, 108m,gIn, 115gCd and 111mCd were identified and their excitation functions were derived. The results were compared with data from previous measurements in the literature and with the results of the theoretical nuclear reaction model code calculations TALYS 1.8 (TENDL-2015) and EMPIRE 3.2 (Malta). From the cross-section curves, thick-target yields were calculated and compared with the available literature data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Clinical coding of prospectively identified paediatric adverse drug reactions--a retrospective review of patient records.

    PubMed

    Bellis, Jennifer R; Kirkham, Jamie J; Nunn, Anthony J; Pirmohamed, Munir

    2014-12-17

    National Health Service (NHS) hospitals in the UK use a system of coding for patient episodes. The coding system used is the International Classification of Disease (ICD-10). There are ICD-10 codes which may be associated with adverse drug reactions (ADRs) and there is a possibility of using these codes for ADR surveillance. This study aimed to determine whether ADRs prospectively identified in children admitted to a paediatric hospital were coded appropriately using ICD-10. The electronic admission abstract for each patient with at least one ADR was reviewed. A record was made of whether the ADR(s) had been coded using ICD-10. Of 241 ADRs, 76 (31.5%) were coded using at least one ICD-10 ADR code. Of the oncology ADRs, 70/115 (61%) were coded using an ICD-10 ADR code compared with 6/126 (4.8%) non-oncology ADRs (difference in proportions 56%, 95% CI 46.2% to 65.8%; p < 0.001). The majority of ADRs detected in a prospective study at a paediatric centre would not have been identified if the study had relied on ICD-10 codes as a single means of detection. Data derived from administrative healthcare databases are not reliable for identifying ADRs by themselves, but may complement other methods of detection.
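    For reference, the reported difference in proportions and its confidence interval can be reproduced (to within rounding and choice of method) with a standard two-proportion Wald interval from the quoted counts, 70/115 oncology versus 6/126 non-oncology ADRs:

      import math

      x1, n1 = 70, 115      # oncology ADRs coded with an ICD-10 ADR code
      x2, n2 = 6, 126       # non-oncology ADRs coded

      p1, p2 = x1 / n1, x2 / n2
      diff = p1 - p2
      se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
      low, high = diff - 1.96 * se, diff + 1.96 * se
      print(f"difference = {diff:.1%}, 95% CI {low:.1%} to {high:.1%}")
      # Prints roughly 56.1% with a CI of about 46.4% to 65.8%, close to the
      # reported 56% (46.2% to 65.8%); small differences reflect the method used.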

  9. Coding the Assembly of Polyoxotungstates with a Programmable Reaction System.

    PubMed

    Ruiz de la Oliva, Andreu; Sans, Victor; Miras, Haralampos N; Long, De-Liang; Cronin, Leroy

    2017-05-01

    Chemical transformations are normally conducted in batch or flow mode, thereby allowing the chemistry to be temporally or spatially controlled, but these approaches are not normally combined dynamically. However, the investigation of the underlying chemistry masked by the self-assembly processes that often occur in one-pot reactions, and exploitation of the potential of complex chemical systems, requires control in both time and space. Additionally, maintaining the intermediate constituents of a self-assembled system "off equilibrium" and utilizing them dynamically at specific time intervals provide access to building blocks that cannot coexist under one-pot conditions and ultimately to the formation of new clusters. Herein, we implement the concept of a programmable networked reaction system, allowing us to connect discrete "one-pot" reactions that produce the building block {W11O38} ≡ {W11} under different conditions and control, in real time, the assembly of a series of polyoxometalate clusters {W12O42} ≡ {W12}, {W22O74} ≡ {W22} 1a, {W34O116} ≡ {W34} 2a, and {W36O120} ≡ {W36} 3a, using pH and ultraviolet-visible monitoring. The programmable networked reaction system reveals that it is possible to assemble a range of different clusters using {W11}-based building blocks, demonstrating the relationship between the clusters within the family of iso-polyoxotungstates, with the final structural motif being entirely dependent on the building block libraries generated in each separate reaction space within the network. In total, this approach led to the isolation of five distinct inorganic clusters using a "fixed" set of reagents and a fully automated sequence code, rather than five entirely different reaction protocols. As such, this approach allows us to discover, record, and implement complex one-pot reaction syntheses in a more general way, increasing the yield and reproducibility and potentially giving access to nonspecialists.

  10. Coding the Assembly of Polyoxotungstates with a Programmable Reaction System

    PubMed Central

    2017-01-01

    Chemical transformations are normally conducted in batch or flow mode, thereby allowing the chemistry to be temporally or spatially controlled, but these approaches are not normally combined dynamically. However, the investigation of the underlying chemistry masked by the self-assembly processes that often occur in one-pot reactions, and exploitation of the potential of complex chemical systems, requires control in both time and space. Additionally, maintaining the intermediate constituents of a self-assembled system “off equilibrium” and utilizing them dynamically at specific time intervals provide access to building blocks that cannot coexist under one-pot conditions and ultimately to the formation of new clusters. Herein, we implement the concept of a programmable networked reaction system, allowing us to connect discrete “one-pot” reactions that produce the building block {W11O38} ≡ {W11} under different conditions and control, in real time, the assembly of a series of polyoxometalate clusters {W12O42} ≡ {W12}, {W22O74} ≡ {W22} 1a, {W34O116} ≡ {W34} 2a, and {W36O120} ≡ {W36} 3a, using pH and ultraviolet–visible monitoring. The programmable networked reaction system reveals that it is possible to assemble a range of different clusters using {W11}-based building blocks, demonstrating the relationship between the clusters within the family of iso-polyoxotungstates, with the final structural motif being entirely dependent on the building block libraries generated in each separate reaction space within the network. In total, this approach led to the isolation of five distinct inorganic clusters using a “fixed” set of reagents and a fully automated sequence code, rather than five entirely different reaction protocols. As such, this approach allows us to discover, record, and implement complex one-pot reaction syntheses in a more general way, increasing the yield and reproducibility and potentially giving access to nonspecialists. PMID:28414229

  11. Cross sections of the {sup 67}Zn(n,{alpha}){sup 64}Ni reaction at 4.0, 5.0, and 6.0 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Guohui; Liu Jiaming; Wu Hao

    2010-11-15

    Experimental cross-section data for the {sup 67}Zn(n,{alpha}){sup 64}Ni reaction are very scanty because the residual nucleus {sup 64}Ni is stable and the commonly used activation method is not feasible. As a result, very large deviations (about a factor of 10) exist among different nuclear data libraries. In the present work, cross sections of the partial {sup 67}Zn(n,{alpha}{sub 0}){sup 64}Ni and total {sup 67}Zn(n,{alpha}){sup 64}Ni reactions are measured at neutron energies of 4.0 and 5.0 MeV for the first time, and those at 6.0 MeV are remeasured for consistency checking. A twin-gridded ionization chamber was used as the charged-particle detector, and two enriched back-to-back {sup 67}Zn samples were adopted. Experiments were performed at the 4.5 MV Van de Graaff accelerator of Peking University. Neutrons were produced through the {sup 2}H(d,n){sup 3}He reaction using a deuterium gas target. The absolute neutron flux was determined by counting the fission fragments from a {sup 238}U sample placed inside the gridded ionization chamber, while a BF{sub 3} long counter was employed as the neutron flux monitor. The present data are compared with the results of previous measurements, evaluations, and TALYS code calculations.

  12. A comparison of total reaction cross section models used in particle and heavy ion transport codes

    NASA Astrophysics Data System (ADS)

    Sihver, Lembit; Lantz, M.; Takechi, M.; Kohama, A.; Ferrari, A.; Cerutti, F.; Sato, T.

    The ability to calculate nucleon-nucleus and nucleus-nucleus total reaction cross sections with precision is very important for studies of basic nuclear properties, e.g. nuclear structure. It is also of importance for particle and heavy ion transport calculations because, in all particle and heavy ion transport codes, the probability that a projectile particle will collide within a certain distance x in matter depends on the total reaction cross section. Furthermore, the total reaction cross sections also scale the calculated partial fragmentation cross sections. It is therefore crucial that accurate total reaction cross section models are used in the transport calculations. In this paper, different models for calculating nucleon-nucleus and nucleus-nucleus total reaction cross sections are compared and discussed.
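    As a point of reference for the simplest class of such models, here is a Bradt-Peters-style geometric parameterization with an energy-dependent Coulomb factor; it is only an illustration and is not one of the specific models compared in the paper, and all parameter values are typical textbook choices.

      import math

      def sigma_reaction_mb(a_p, z_p, a_t, z_t, e_cm_mev,
                            r0_fm=1.2, overlap_fm=1.0, rc_fm=1.3):
          """Geometric total reaction cross section (mb), Bradt-Peters-like form."""
          radii = a_p ** (1.0 / 3.0) + a_t ** (1.0 / 3.0)
          barrier = 1.44 * z_p * z_t / (rc_fm * radii)          # Coulomb barrier, MeV
          geometry_fm2 = math.pi * r0_fm ** 2 * (radii - overlap_fm) ** 2
          return 10.0 * geometry_fm2 * max(0.0, 1.0 - barrier / e_cm_mev)

      # Illustrative: 12C projectile on 27Al at a few center-of-mass energies.
      for e in (20.0, 50.0, 200.0):
          print(f"E_cm = {e:5.1f} MeV -> sigma_R ~ {sigma_reaction_mb(12, 6, 27, 13, e):.0f} mb")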

  13. Investigation of activation cross section data of alpha particle induced nuclear reaction on molybdenum up to 40 MeV: Review of production routes of medically relevant 97,103Ru

    NASA Astrophysics Data System (ADS)

    Tárkányi, F.; Hermanne, A.; Ditrói, F.; Takács, S.; Ignatyuk, A.

    2017-05-01

    The main goals of this investigation were to expand and consolidate reliable activation cross-section data for the natMo(α,x) reactions in connection with the production of the medically relevant 97,103Ru and the use of the natMo(α,x)97Ru reaction for monitoring beam parameters. The excitation functions for the formation of the gamma-emitting radionuclides 94Ru, 95Ru, 97Ru, 103Ru, 93mTc, 93gTc(m+), 94mTc, 94gTc, 95mTc, 95gTc, 96gTc(m+), 99mTc, 93mMo, 99Mo(cum), 90Nb(m+) and 88Zr were measured up to 40 MeV alpha-particle energy by using the stacked-foil technique and the activation method. Data from our earlier, similar experiments were re-evaluated, resulting in corrections to the reported results. Our experimental data were compared with critically analyzed literature data and with the results of model calculations obtained using the ALICE-IPPE, EMPIRE 3.1 (Rivoli) and TALYS codes (TENDL-2011 and TENDL-2015 on-line libraries). Nuclear data for the different production routes of 97Ru and 103Ru are compiled and reviewed.

  14. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  15. Cross sections of proton-induced nuclear reactions on bismuth and lead up to 100 MeV

    NASA Astrophysics Data System (ADS)

    Mokhtari Oranj, L.; Jung, N. S.; Bakhtiari, M.; Lee, A.; Lee, H. S.

    2017-04-01

    Production cross sections of the 209Bi(p,xn)207,206,205,204,203Po, 209Bi(p,pxn)207,206,205,204,203,202Bi, and natPb(p,xn)206,205,204,203,202,201Bi reactions were measured to fill the gap in the excitation functions up to 100 MeV as well as to assess the effects of different nuclear properties on proton-induced reactions on heavy nuclei. The targets were arranged in two different stacks consisting of Bi, Pb, Al, Au foils and Pb plates. The proton beam intensity was determined by the activation analysis method using the 27Al(p,3pn)24Na, 197Au(p,pn)196Au, and 197Au(p,p3n)194Au monitor reactions in parallel, as well as by the Gafchromic film dosimetry method. The activities of the produced radionuclides in the foils were measured with an HPGe spectroscopy system. Over 40 new cross sections were measured in the investigated energy range. A satisfactory agreement was observed between the present experimental data and the previously published data. Excitation functions of the mentioned reactions were calculated using the theoretical model based on the latest version of the TALYS code and compared to the new data as well as with other data in the literature. Additionally, the effects of various combinations of nuclear input parameters of different level density models, optical model potentials, and γ-ray strength functions were considered. It was concluded that if certain level density models are used, the calculated cross sections can be comparable to the measured data. Furthermore, the effects of the optical model potential and γ-ray strength functions were considerably smaller than that of the nuclear level densities.

  16. Measurement of activation cross sections of alpha particle induced reactions on iridium up to an energy of 50 MeV.

    PubMed

    Takács, S; Ditrói, F; Szűcs, Z; Aikawa, M; Haba, H; Komori, Y; Saito, M

    2018-06-01

    Cross sections of alpha particle induced nuclear reactions on iridium were investigated using a 51.2-MeV alpha particle beam. The standard stacked-foil target technique and the activation method were applied. The activity of the reaction products was assessed without chemical separation using high resolution gamma-ray spectrometry. Excitation functions for production of gold, platinum and iridium isotopes (196m2Au, 196m,gAu, 195m,gAu, 194Au, 193m,gAu, 192Au, 191m,gAu, 191Pt, 195mPt, 194gIr, 194mIr, 192gIr, 190gIr and 189Ir) were determined and compared with available earlier measured experimental data and with results of theoretical calculations using the TALYS code system. Cross section data were reported for the first time for the natIr(α,x)196m2Au, natIr(α,x)196m,gAu, natIr(α,x)191Pt, natIr(α,x)195mPt, natIr(α,x)194gIr, natIr(α,x)194mIr, natIr(α,x)190gIr and natIr(α,x)189Ir processes. A possible production route for 195mPt, a potentially important radionuclide in nuclear medicine, is discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Code C# for chaos analysis of relativistic many-body systems with reactions

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.

    2012-04-01

    In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object-oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), can be supplied as a parameter using a specific XML input file. Inspired by the Poincaré section, we also propose the “Clusterization Map” as a new, intuitive analysis method for many-body systems. For exemplification, we implemented a numerical toy model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions.

    Program summary. Catalogue identifier: AEGH_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 184 628. No. of bytes in distributed program, including test data, etc.: 7 905 425. Distribution format: tar.gz. Programming language: Visual C#.NET 2005. Computer: PC. Operating system: .NET Framework 2.0 running on MS Windows. Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread; one processor is used for each many-body system. RAM: 128 Megabytes. Classification: 6.2, 6.5. Catalogue identifier of previous version: AEGH_v1_0. Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464. External routines: .NET Framework 2.0 Library. Does the new version supersede the previous version?: Yes. Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second-order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions.
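
    To illustrate the idea of supplying the reaction list as an XML parameter, the snippet below parses a small, hypothetical reaction file. The tag and attribute names are invented for this sketch (and it is written in Python rather than the C# of the library); it does not reproduce the actual Chaos Many-Body Engine input schema.

        import xml.etree.ElementTree as ET

        # Hypothetical reaction-list layout: each reaction carries the particle
        # types involved, a probability, a cross section and/or a lifetime.
        REACTIONS_XML = """
        <reactions>
          <reaction name="delta_production" probability="0.3" cross_section_mb="25.0">
            <in>p p</in>
            <out>p Delta+</out>
          </reaction>
          <reaction name="delta_decay" probability="1.0" lifetime_fm_c="1.7">
            <in>Delta+</in>
            <out>p pi0</out>
          </reaction>
        </reactions>
        """

        for r in ET.fromstring(REACTIONS_XML):
            inputs = r.findtext("in").split()
            outputs = r.findtext("out").split()
            print(r.get("name"), inputs, "->", outputs,
                  "P =", r.get("probability", "n/a"),
                  "sigma[mb] =", r.get("cross_section_mb", "n/a"),
                  "tau[fm/c] =", r.get("lifetime_fm_c", "n/a"))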

  18. Thorium-232 fission induced by light charged particles up to 70 MeV

    NASA Astrophysics Data System (ADS)

    Métivier, Vincent; Duchemin, Charlotte; Guertin, Arnaud; Michel, Nathalie; Haddad, Férid

    2017-09-01

    Studies have been devoted to the production of alpha emitters for medical applications, in collaboration with GIP ARRONAX, which operates a high-energy, high-intensity multi-particle cyclotron. The production of Ra-223, Ac-225 and U-230 has been investigated via the Th-232(p,x) and Th-232(d,x) reactions using the stacked-foils method and gamma spectrometry measurements. These reactions have led to the production of several fission products, including some of medical interest such as Mo-99, Cd-115g and I-131. This article presents cross section data for the fission products obtained from these undedicated experiments. These data have also been compared with TALYS code results.

  19. Experimental cross-sections for proton induced nuclear reactions on mercury up to 65 MeV

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Tárkányi, F.; Takács, S.; Ditrói, F.; Szücs, Z.; Brezovcsik, K.

    2016-07-01

    Cross-sections for the formation of activation products induced by protons on natural mercury targets were measured. Results for 196m,196g,197g(cum),198m,198g,199g(cum),200g(cum),201,202Tl, 194g(cum),195g(cum),196g(cum),198m,199g(cum)Au and 195m,197m,203Hg are presented up to 65 MeV incident particle energy, many of them for the first time. The experimental data are compared with literature values and with the predictions of the TALYS 1.6 code (results taken from the TENDL-2015 on-line library). Thick target yields were derived, and possible applications in biomedical sciences are discussed.

  20. Simulation of prompt gamma-ray emission during proton radiotherapy.

    PubMed

    Verburg, Joost M; Shih, Helen A; Seco, Joao

    2012-09-07

    The measurement of prompt gamma rays emitted from proton-induced nuclear reactions has been proposed as a method to verify in vivo the range of a clinical proton radiotherapy beam. A good understanding of the prompt gamma-ray emission during proton therapy is key to developing a clinically feasible technique, as it can facilitate accurate simulations and uncertainty analysis of gamma detector designs. Also, the gamma production cross-sections may be incorporated as prior knowledge in the reconstruction of the proton range from the measurements. In this work, we performed simulations of proton-induced nuclear reactions with the main elements of human tissue, carbon-12, oxygen-16 and nitrogen-14, using the nuclear reaction models of the GEANT4 and MCNP6 Monte Carlo codes and the dedicated nuclear reaction codes TALYS and EMPIRE. For each code, we made an effort to optimize the input parameters and model selection. The results of the models were compared to available experimental data of discrete gamma line cross-sections. Overall, the dedicated nuclear reaction codes reproduced the experimental data more consistently, while the Monte Carlo codes showed larger discrepancies for a number of gamma lines. The model differences lead to a variation of the total gamma production near the end of the proton range by a factor of about 2. These results indicate a need for additional theoretical and experimental study of proton-induced gamma emission in human tissue.

  1. On microscopic theory of radiative nuclear reaction characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerdzhiev, S. P.; Achakovskiy, O. I., E-mail: oachakovskiy@ippe.ru; Avdeenkov, A. V.

    2016-07-15

    A survey of some results in the modern microscopic theory of properties of nuclear reactions with gamma rays is given. First of all, we discuss the impact of Phonon Coupling (PC) on the Photon Strength Function (PSF) because it represents the most natural physical source of additional strength found for Sn isotopes in recent experiments that could not be explained within the standard HFB + QRPA approach. The self-consistent version of the Extended Theory of Finite Fermi Systems in the Quasiparticle Time Blocking Approximation is applied. It uses the HFB mean field and includes both the QRPA and PC effects on the basis of the SLy4 Skyrme force. With our microscopic E1 PSFs, the following properties have been calculated for many stable and unstable even–even semi-magic Sn and Ni isotopes as well as for double-magic 132Sn and 208Pb using the reaction codes EMPIRE and TALYS with several Nuclear Level Density (NLD) models: (1) the neutron capture cross sections; (2) the corresponding neutron capture gamma spectra; (3) the average radiative widths of neutron resonances. In all the properties considered, the PC contribution turned out to be significant, as compared with the standard QRPA one, and necessary to explain the available experimental data. The results with the phenomenological so-called generalized superfluid NLD model turned out to be worse, on the whole, than those obtained with the microscopic HFB + combinatorial NLD model. The very topical question about the M1 resonance contribution to PSFs is also discussed. Finally, we also discuss the modern microscopic NLD models based on the self-consistent HFB method and show their relevance to explain the experimental data as compared with the phenomenological models. The use of these self-consistent microscopic approaches is of particular relevance for nuclear astrophysics, but also for the study of double-magic nuclei.

  2. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2015-09-01

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues and may eventually be included in event-based models of space radiation risks. Moreover, since many reactions in biological systems are of this type, this algorithm might also play a pivotal role in future simulation programs beyond radiation chemistry, such as the simulation of biochemical networks in time and space.
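
    A much-simplified sketch of the sampling idea is shown below for the free-diffusion limit only (no reaction, no reverse step): the relative coordinate of a pair diffusing with D_rel = D_A + D_B is propagated by a 3D Gaussian displacement. The full algorithm of the paper instead samples the Green's function of the partially diffusion-controlled reversible ABCD reaction, which has no such elementary closed form; all values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        def sample_separation_free_diffusion(r0, D_rel, t, n_samples=5):
            """Sample the separation distance of a freely diffusing pair after time t.
            The relative coordinate diffuses with D_rel = D_A + D_B, so the new
            separation vector is the old one plus a 3D Gaussian displacement of
            standard deviation sqrt(2 * D_rel * t) per Cartesian component."""
            sigma = np.sqrt(2.0 * D_rel * t)
            r0_vec = np.array([r0, 0.0, 0.0])
            steps = rng.normal(0.0, sigma, size=(n_samples, 3))
            return np.linalg.norm(r0_vec + steps, axis=1)

        # Illustrative values: initial separation 1 nm, D_rel = 5e-9 m^2/s, t = 1 ps.
        print(sample_separation_free_diffusion(r0=1e-9, D_rel=5e-9, t=1e-12))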

  3. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  4. 2D and 3D assessment of sustentaculum tali screw fixation with or without Screw Targeting Clamp.

    PubMed

    De Boer, A Siebe; Van Lieshout, Esther M M; Vellekoop, Leonie; Knops, Simon P; Kleinrensink, Gert-Jan; Verhofstad, Michael H J

    2017-12-01

    Precise placement of sustentaculum tali screw(s) is essential for restoring anatomy and biomechanical stability of the calcaneus. This can be challenging due to the small target area and the presence of neurovascular structures on the medial side. The aim was to evaluate the precision of positioning of the subchondral posterior facet screw and the processus anterior calcanei screw with or without a Screw Targeting Clamp. The secondary aim was to evaluate the added value of peroperative 3D imaging over 2D radiographs alone. Twenty Anubifix™ embalmed, human anatomic lower limb specimens were used. A subchondral posterior facet screw and a processus anterior calcanei screw were placed using an extended lateral approach. A senior orthopedic trauma surgeon experienced in calcaneal fracture surgery and a senior resident with limited experience in calcaneal surgery performed screw fixation in five specimens with and in five specimens without the clamp. 2D lateral and axial radiographs and a 3D recording were obtained postoperatively. Anatomical dissection was performed postoperatively as the diagnostic gold standard in order to obtain the actual screw positions. Blinded assessment of the quality of fixation was performed by two surgeons. In 2D, eight screws were considered malpositioned when placed with the targeting device versus nine placed freehand. In the 3D recordings, two additional screws were malpositioned in each group as compared to the gold standard. As opposed to the senior surgeon, the senior resident seemed to get the best results using the Screw Targeting Clamp (eight screws were malpositioned freehand versus five with the targeting clamp). In nine out of 20 specimens, 3D images provided additional information concerning the target area and intra-articular placement. Based on the 3D assessment, five additional screws would have required repositioning. Except for one, all screw positions were rated equally after dissection when compared with the 3D examinations.

  5. Activation cross section and isomeric cross-section ratio for the 151Eu(n,2n)150m,gEu process

    NASA Astrophysics Data System (ADS)

    Luo, Junhua; Li, Suyuan; Jiang, Li

    2018-07-01

    The cross sections of the 151Eu(n,2n)150m,gEu reactions and their isomeric cross section ratios σm/σt have been measured experimentally. Cross sections were measured relative to the reference 93Nb(n,2n)92mNb reaction cross section by means of the activation technique at three neutron energies, 13.5, 14.1, and 14.8 MeV. Monoenergetic neutron beams were produced via the 3H(d,n)4He reaction, and both Eu2O3 samples and Nb monitor foils were activated together to determine the reaction cross section and the incident neutron flux. The activities induced in the reaction products were measured using high-resolution gamma ray spectroscopy. Cross sections were also calculated theoretically using the nuclear model code TALYS-1.8 with different level density options at neutron energies varying from the reaction threshold to 20 MeV. Results are discussed and compared with the corresponding literature data.
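
    Level-density sensitivity studies of this kind are typically run as a small loop over the TALYS input keyword that selects the level density model. The driver below is only a sketch: it assumes a local talys executable, and the keyword names (projectile, element, mass, energy, ldmodel) are quoted from memory of the TALYS manual, so they should be checked against the version actually used.

        import subprocess

        def run_talys(ld_model: int) -> None:
            """Write a minimal keyword deck and run TALYS for one level density option."""
            deck = "\n".join([
                "projectile n",
                "element eu",
                "mass 151",
                "energy 14.1",
                f"ldmodel {ld_model}",
                "",
            ])
            with open("talys.inp", "w") as f:
                f.write(deck)
            # TALYS reads its keyword file from standard input and writes to stdout.
            with open("talys.inp") as inp, open(f"talys_ld{ld_model}.out", "w") as out:
                subprocess.run(["talys"], stdin=inp, stdout=out, check=True)

        # for ld in range(1, 7):   # loop over the available level density options
        #     run_talys(ld)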

  6. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulence models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model the effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  7. RAINIER: A simulation tool for distributions of excited nuclear states and cascade fluctuations

    NASA Astrophysics Data System (ADS)

    Kirsch, L. E.; Bernstein, L. A.

    2018-06-01

    A new code has been developed named RAINIER that simulates the γ-ray decay of discrete and quasi-continuum nuclear levels for a user-specified range of energy, angular momentum, and parity including a realistic treatment of level spacing and transition width fluctuations. A similar program, DICEBOX, uses the Monte Carlo method to simulate level and width fluctuations but is restricted in its initial level population algorithm. On the other hand, modern reaction codes such as TALYS and EMPIRE populate a wide range of states in the residual nucleus prior to γ-ray decay, but do not go beyond the use of deterministic functions and therefore neglect cascade fluctuations. This combination of capabilities allows RAINIER to be used to determine quasi-continuum properties through comparison with experimental data. Several examples are given that demonstrate how cascade fluctuations influence experimental high-resolution γ-ray spectra from reactions that populate a wide range of initial states.
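
    The two fluctuation ingredients mentioned above can be sketched with a few lines of sampling code: nearest-neighbour level spacings drawn from the Wigner surmise and partial widths drawn from a Porter-Thomas (chi-squared with one degree of freedom) distribution. The sketch assumes GOE statistics and uses illustrative parameter values; it is not RAINIER's implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        def wigner_spacings(mean_spacing, n):
            """Nearest-neighbour level spacings from the Wigner surmise (GOE):
            p(s) = (pi*s/2) * exp(-pi*s^2/4), sampled via the inverse CDF
            F(s) = 1 - exp(-pi*s^2/4), with s = S/<S>."""
            u = rng.uniform(size=n)
            return mean_spacing * np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

        def porter_thomas_widths(mean_width, n):
            """Partial transition widths fluctuate as chi-squared with one
            degree of freedom (Porter-Thomas)."""
            return mean_width * rng.chisquare(df=1, size=n)

        levels = np.cumsum(wigner_spacings(mean_spacing=10.0, n=1000))   # eV, illustrative
        widths = porter_thomas_widths(mean_width=1.0e-3, n=1000)         # eV, illustrative
        print(levels[:3], widths[:3])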

  8. RAINIER: A simulation tool for distributions of excited nuclear states and cascade fluctuations

    DOE PAGES

    Kirsch, L. E.; Bernstein, L. A.

    2018-03-04

    In this paper, a new code has been developed named RAINIER that simulates the γ-ray decay of discrete and quasi-continuum nuclear levels for a user-specified range of energy, angular momentum, and parity including a realistic treatment of level spacing and transition width fluctuations. A similar program, DICEBOX, uses the Monte Carlo method to simulate level and width fluctuations but is restricted in its initial level population algorithm. On the other hand, modern reaction codes such as TALYS and EMPIRE populate a wide range of states in the residual nucleus prior to γ-ray decay, but do not go beyond the use of deterministic functions and therefore neglect cascade fluctuations. This combination of capabilities allows RAINIER to be used to determine quasi-continuum properties through comparison with experimental data. Finally, several examples are given that demonstrate how cascade fluctuations influence experimental high-resolution γ-ray spectra from reactions that populate a wide range of initial states.

  9. RAINIER: A simulation tool for distributions of excited nuclear states and cascade fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirsch, L. E.; Bernstein, L. A.

    In this paper, a new code has been developed named RAINIER that simulates the γ-ray decay of discrete and quasi-continuum nuclear levels for a user-specified range of energy, angular momentum, and parity including a realistic treatment of level spacing and transition width fluctuations. A similar program, DICEBOX, uses the Monte Carlo method to simulate level and width fluctuations but is restricted in its initial level population algorithm. On the other hand, modern reaction codes such as TALYS and EMPIRE populate a wide range of states in the residual nucleus prior to γ-ray decay, but do not go beyond the use of deterministic functions and therefore neglect cascade fluctuations. This combination of capabilities allows RAINIER to be used to determine quasi-continuum properties through comparison with experimental data. Finally, several examples are given that demonstrate how cascade fluctuations influence experimental high-resolution γ-ray spectra from reactions that populate a wide range of initial states.

  10. Determination of the Secondary Neutron Flux at the Massive Natural Uranium Spallation Target

    NASA Astrophysics Data System (ADS)

    Zeman, M.; Adam, J.; Baldin, A. A.; Furman, W. I.; Gustov, S. A.; Katovsky, K.; Khushvaktov, J.; Mar'in, I. I.; Novotny, F.; Solnyshkin, A. A.; Tichy, P.; Tsoupko-Sitnikov, V. M.; Tyutyunnikov, S. I.; Vespalec, R.; Vrzalova, J.; Wagner, V.; Zavorka, L.

    The flux of secondary neutrons generated in collisions of the 660 MeV proton beam with the massive natural uranium spallation target was investigated using a set of monoisotopic threshold activation detectors. Sandwiches made of thin high-purity Al, Co, Au, and Bi metal foils were installed in different positions across the whole spallation target. The gamma-ray activity of products of (n,xn) and other studied reactions was measured offline with germanium semiconductor detectors. Reaction yields of radionuclides with half-lives exceeding 100 min and with effective neutron energy thresholds between 3.6 MeV and 186 MeV provided information about the spectrum of spallation neutrons in this energy region and beyond. The experimental neutron flux was determined using the measured reaction yields and cross-sections calculated with the TALYS 1.8 nuclear reaction program and the INCL4-ABLA event generator of MCNP6. Neutron spectra in the region of the activation sandwiches were also modeled with the radiation transport code MCNPX 2.7. The neutron flux based on excitation functions from TALYS provides a reasonable description of the neutron spectrum inside the spallation target and is in good agreement with Monte-Carlo predictions. The experimental flux based on INCL4 cross-sections somewhat underestimates the modeled spectrum in the whole region of interest, but agreement within a few standard deviations was still reached. The paper summarizes the basic principles of the method for determining the spectrum of high-energy neutrons without employing spectral adjustment routines and points out the need for model improvements and precise cross-section measurements.
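
    For a single threshold activation detector, the underlying relation is simply that the measured reaction rate equals the number of target atoms times the spectrum-averaged cross section times the flux in the detector's effective energy window. The numbers below are hypothetical, and the sketch ignores the saturation, cooling and counting corrections applied in the actual analysis.

        def flux_above_threshold(reaction_rate_per_s, n_atoms, sigma_avg_mb):
            """phi ≈ R / (N * sigma_avg): neutron flux in the effective energy window
            of a threshold activation detector, given the measured reaction rate and a
            spectrum-averaged cross section (e.g. from TALYS excitation functions)."""
            sigma_cm2 = sigma_avg_mb * 1e-27
            return reaction_rate_per_s / (n_atoms * sigma_cm2)   # neutrons / cm^2 / s

        # Illustrative numbers only (hypothetical Bi foil and (n,4n) channel):
        print(f"{flux_above_threshold(2.0e2, 5.0e21, 1500.0):.2e} n/cm^2/s")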

  11. EMPIRE: A code for nuclear astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A.

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparisons to experimental data show consistent agreement for all relevant channels.

  12. Activation cross section and isomeric cross section ratio for the 76Ge(n,2n)75m,gGe process

    NASA Astrophysics Data System (ADS)

    Luo, Junhua; Jiang, Li; Wang, Xinxing

    2018-04-01

    We measured neutron-induced reaction cross sections for the 76Ge(n,2n)75m,gGe reactions and their isomeric cross section ratios σm/σg at three neutron energies between 13 and 15 MeV by an activation and off-line γ-ray spectrometric technique using the K-400 Neutron Generator at the Chinese Academy of Engineering Physics (CAEP). Ge samples and Nb monitor foils were activated together to determine the reaction cross section and the incident neutron flux. The monoenergetic neutron beams were formed via the 3H(d,n)4He reaction. The pure cross section of the ground state was derived from the absolute cross section of the metastable state and the residual nuclear decay analysis. The cross sections were also calculated using the nuclear model code TALYS-1.8 with different level density options at neutron energies varying from the reaction threshold to 20 MeV. Results are discussed and compared with the corresponding literature data.

  13. Uncertainty evaluation of nuclear reaction model parameters using integral and microscopic measurements. Covariances evaluation with CONRAD code

    NASA Astrophysics Data System (ADS)

    de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.

    2010-10-01

    In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on nuclear reaction models with adjustable parameters. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Nuclear reactor physicists have asked for major breakthroughs in assessing the proper uncertainties to be used in applications. In this paper, the mathematical methods developed in the CONRAD code [2] are presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and their propagation to nuclear reaction model parameters or cross sections. The marginalization procedure is presented using analytical or Monte Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account early enough in the evaluation process to remove discrepancies. In this paper, we describe a mathematical framework to take this kind of information into account properly.
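
    The parameter-estimation step described above can be illustrated with a linearized generalized-least-squares (Bayesian) update. The sketch below is generic, with an invented sensitivity matrix and covariances; it is not CONRAD's actual implementation and in particular omits the marginalization of systematic uncertainties discussed in the paper.

        import numpy as np

        def gls_update(theta_prior, M_prior, y_exp, V_exp, G):
            """One linearized Bayesian/GLS step:
               theta_post = theta_prior + K (y_exp - G theta_prior)
               M_post     = (I - K G) M_prior,  with  K = M G^T (G M G^T + V)^-1.
            G is the sensitivity matrix d(calculated observable)/d(parameter)."""
            S = G @ M_prior @ G.T + V_exp
            K = M_prior @ G.T @ np.linalg.inv(S)
            theta_post = theta_prior + K @ (y_exp - G @ theta_prior)
            M_post = (np.eye(len(theta_prior)) - K @ G) @ M_prior
            return theta_post, M_post

        # Two model parameters constrained by three (illustrative) measurements.
        theta0 = np.array([1.0, 0.5])
        M0 = np.diag([0.04, 0.01])                       # prior parameter covariance
        G = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])
        y = np.array([2.1, 1.45, 1.62])
        V = np.diag([0.01, 0.01, 0.02])                  # experimental covariance
        theta1, M1 = gls_update(theta0, M0, y, V, G)
        print(theta1, np.sqrt(np.diag(M1)))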

  14. Production of medical isotopes from a thorium target irradiated by light charged particles up to 70 MeV

    NASA Astrophysics Data System (ADS)

    Duchemin, C.; Guertin, A.; Haddad, F.; Michel, N.; Métivier, V.

    2015-02-01

    The irradiation of a thorium target by light charged particles (protons and deuterons) leads to the production of several isotopes of medical interest. A direct nuclear reaction allows the production of Protactinium-230, which decays to Uranium-230, the mother nucleus of Thorium-226, a promising isotope for alpha radionuclide therapy. The fission of Thorium-232 produces fragments of interest such as Molybdenum-99, Iodine-131 and Cadmium-115g. We focus our study on the production of these isotopes, performing new cross section measurements and calculating production yields. Our new sets of data are compared with the literature and with the latest version of the TALYS code.
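
    Production yields of this kind follow from folding the excitation function with the stopping power of the target. The sketch below uses toy analytic forms for both σ(E) and the mass stopping power (they are not data from this work); in practice σ(E) would come from the measurements or from TALYS, and the stopping power from SRIM-type tables.

        import numpy as np

        E = np.linspace(10.0, 70.0, 601)                              # proton energy, MeV
        dE = E[1] - E[0]
        sigma_cm2 = 60.0e-27 * np.exp(-0.5 * ((E - 30.0) / 8.0)**2)   # toy sigma(E), cm^2
        stopping_mev_cm2_g = 20.0 * (E / 10.0)**-0.8                  # toy mass stopping power

        N_A, A_TH, E_CHARGE = 6.022e23, 232.0, 1.602e-19

        # Nuclei produced per incident proton while it slows from 70 MeV to 10 MeV:
        yield_per_proton = (N_A / A_TH) * np.sum(sigma_cm2 / stopping_mev_cm2_g) * dE

        # Activity at the end of a 24 h, 10 uA irradiation for a ~66 h half-life product:
        protons_per_s = 10.0e-6 / E_CHARGE
        lam = np.log(2.0) / (66.0 * 3600.0)
        activity_bq = yield_per_proton * protons_per_s * (1.0 - np.exp(-lam * 24.0 * 3600.0))
        print(f"{yield_per_proton:.2e} nuclei/proton, A(EOB) ≈ {activity_bq / 3.7e10:.2f} Ci")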

  15. Production of medical isotopes from a thorium target irradiated by light charged particles up to 70 MeV.

    PubMed

    Duchemin, C; Guertin, A; Haddad, F; Michel, N; Métivier, V

    2015-02-07

    The irradiation of a thorium target by light charged particles (protons and deuterons) leads to the production of several isotopes of medical interest. A direct nuclear reaction allows the production of Protactinium-230, which decays to Uranium-230, the mother nucleus of Thorium-226, a promising isotope for alpha radionuclide therapy. The fission of Thorium-232 produces fragments of interest such as Molybdenum-99, Iodine-131 and Cadmium-115g. We focus our study on the production of these isotopes, performing new cross section measurements and calculating production yields. Our new sets of data are compared with the literature and with the latest version of the TALYS code.

  16. Determination of neutron capture cross sections of 232Th at 14.1 MeV and 14.8 MeV using the neutron activation method

    NASA Astrophysics Data System (ADS)

    Lan, Chang-Lin; Zhang, Yi; Lv, Tao; Xie, Bao-Lin; Peng, Meng; Yao, Ze-En; Chen, Jin-Gen; Kong, Xiang-Zhong

    2017-04-01

    The 232Th(n,γ)233Th neutron capture reaction cross sections were measured at average neutron energies of 14.1 MeV and 14.8 MeV using the activation method. The neutron flux was determined using the monitor reaction 27Al(n,α)24Na. The induced gamma-ray activities were measured using a low background gamma ray spectrometer equipped with a high resolution HPGe detector. The experimentally determined cross sections were compared with the data in the literature and with the evaluated data of ENDF/B-VII.1, JENDL-4.0u+, and CENDL-3.1. The excitation functions of the 232Th(n,γ)233Th reaction were also calculated theoretically using the TALYS-1.6 computer code. Supported by Chinese TMSR Strategic Pioneer Science and Technology Project-The Th-U Fuel Physics Term (XDA02010100) and National Natural Science Foundation of China (11205076, 21327801)

  17. Feasibility study of nuclear transmutation by negative muon capture reaction using the PHITS code

    NASA Astrophysics Data System (ADS)

    Abe, Shin-ichiro; Sato, Tatsuhiko

    2016-06-01

    The feasibility of nuclear transmutation of fission products in high-level radioactive waste by the negative muon capture reaction is investigated using the Particle and Heavy Ion Transport code System (PHITS). It is found that about 80% of stopped negative muons transmute the target nuclide into a stable or short-lived nuclide in the case of 135Cs, which is one of the most important nuclides for transmutation. The simulation results also indicate that the position of transmutation can be controlled by changing the energy of the incident negative muons. Based on our simulation, it would take approximately 8.5 × 10⁸ years to transmute 500 g of 135Cs with a negative muon beam of the highest intensity currently available.
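
    The quoted timescale can be checked with a one-line estimate: the number of 135Cs atoms in 500 g divided by the useful transmutation rate. The muon intensity assumed below (about 10^8 stopped muons per second) is our own assumption about "the highest intensity currently available" and is not stated in the abstract.

        # Back-of-the-envelope check of the ~8.5e8 year figure.
        mass_g, molar_mass = 500.0, 135.0                 # 500 g of 135Cs
        n_atoms = mass_g / molar_mass * 6.022e23
        muon_rate = 1.0e8                                 # stopped muons per second (assumed)
        transmute_fraction = 0.8                          # ~80% of captures transmute usefully
        seconds = n_atoms / (muon_rate * transmute_fraction)
        print(f"{seconds / 3.156e7:.1e} years")           # ~8.8e8 yr, consistent with ~8.5e8 yr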

  18. Calculation of Excitation Function of Some Structural Fusion Material for (n, p) Reactions up to 25 MeV

    NASA Astrophysics Data System (ADS)

    Reshid, Tarik S.

    2013-04-01

    Fusion offers a practically inexhaustible energy source for humankind, and there have been significant research and development studies on inertial and magnetic fusion reactor technology. Furthermore, fusion reactors do not pose radioactive nuclear waste problems. In this study, (n,p) reactions on some structural fusion materials such as 27Al, 51V, 52Cr, 55Mn and 56Fe have been investigated. New calculations of the excitation functions of the 27Al(n,p)27Mg, 51V(n,p)51Ti, 52Cr(n,p)52V, 55Mn(n,p)55Cr and 56Fe(n,p)56Mn reactions have been carried out up to 30 MeV incident neutron energy. Statistical model calculations, based on the Hauser-Feshbach formalism, have been carried out using TALYS-1.0 and compared with available experimental data in the literature and with the ENDF/B-VII (T = 300 K), JENDL-3.3 (T = 300 K) and JEFF-3.1 (T = 300 K) evaluated libraries.

  19. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains

  20. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Oblozinsky, P.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through (http://www-nds.iaea.org/RIPL-3/). This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES

  1. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains

  2. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
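
    A minimal example of the kind of stiff problem such a solver targets is a two-step sequential reaction with widely separated rate constants. The sketch below uses SciPy's LSODA integrator, a member of the same Livermore ODEPACK family as LSODE; the mechanism and rate constants are illustrative and are not one of LSENS's built-in test cases.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Isothermal static system: A -> B (fast) and B -> C (slow).
        k1, k2 = 1.0e4, 1.0e-1                      # 1/s, deliberately disparate rates

        def rhs(t, y):
            a, b, c = y
            return [-k1 * a, k1 * a - k2 * b, k2 * b]

        sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[1.0, 0.0, 0.0],
                        method="LSODA", rtol=1e-8, atol=1e-12)
        print(sol.y[:, -1])                          # [A], [B], [C] at t = 50 s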

  3. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, the excitation energy dependencies were found to be inconsistent with the Fermi-gas model.
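
    For orientation, the Fermi-gas (high-energy) part of the Gilbert and Cameron prescription and the associated spin distribution can be evaluated directly. The parameter values below are merely of a plausible size for A ≈ 60 nuclei (a ≈ A/8 MeV⁻¹) and are not the fitted values of this work.

        import numpy as np

        def fermi_gas_level_density(E, a, delta, spin_cutoff):
            """Back-shifted Fermi-gas total level density
               rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2) * sigma * a**0.25 * U**1.25),
            with U = E - delta the back-shifted excitation energy (MeV)."""
            U = E - delta
            return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0)
                                                   * spin_cutoff * a**0.25 * U**1.25)

        def spin_distribution(J, spin_cutoff):
            """Fraction of levels with spin J for spin-cutoff parameter sigma."""
            return ((2 * J + 1) / (2.0 * spin_cutoff**2)
                    * np.exp(-(J + 0.5)**2 / (2.0 * spin_cutoff**2)))

        print(f"rho(8 MeV) ~ {fermi_gas_level_density(8.0, a=7.5, delta=1.0, spin_cutoff=3.5):.3e} MeV^-1")
        print(f"P(J=2)     ~ {spin_distribution(2, 3.5):.2f}")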

  4. Characteristics code for shock initiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Partom, Y.

    1986-10-01

    We developed SHIN, a characteristics code for shock initiation studies. We describe in detail the equations of state, reaction model, rate equations, and numerical difference equations that SHIN incorporates. SHIN uses the previously developed surface burning reaction model, which represents the shock initiation process in TATB better than bulk reaction models do. A large number of computed simulations show that the code is a reliable and efficient tool for shock initiation studies. A parametric study shows the effect on build-up and run distance to detonation of (1) type of boundary condition, (2) burning velocity curve, (3) shock duration, (4) rise time in ramp loading, (5) initial density (or porosity) of the explosive, (6) initial temperature, and (7) grain size. 29 refs., 65 figs.

  5. Development of the Off-line Analysis Code for GODDESS

    NASA Astrophysics Data System (ADS)

    Garland, Heather; Cizewski, Jolie; Lepailleur, Alex; Walters, David; Pain, Steve; Smith, Karl

    2016-09-01

    Determining (n,γ) cross sections on unstable nuclei is important for understanding the r-process that is theorized to occur in supernovae and neutron-star mergers. However, (n,γ) reactions are difficult to measure directly because of the short lifetime of the involved neutron-rich nuclei. A possible surrogate for the (n,γ) reaction is the (d,pγ) reaction; the measurement of these reactions in inverse kinematics is part of the scope of GODDESS - Gammasphere ORRUBA (Oak Ridge Rutgers University Barrel Array): Dual Detectors for Experimental Structure Studies. The development of an accurate and efficient off-line analysis code for GODDESS experiments is not only essential, but also provides a unique opportunity to create an analysis code designed specifically for transfer reaction experiments. The off-line analysis code has been developed to produce histograms from the binary data file to determine how to best sort events. Recent developments in the off-line analysis code will be presented as well as details on the energy and position calibrations for the ORRUBA detectors. This work is supported in part by the U.S. Department of Energy and National Science Foundation.

  6. Re-measurement of the 33S(α ,p )36Cl cross section for early solar system nuclide enrichment

    NASA Astrophysics Data System (ADS)

    Anderson, Tyler; Skulski, Michael; Clark, Adam; Nelson, Austin; Ostdiek, Karen; Collon, Philippe; Chmiel, Greg; Woodruff, Tom; Caffee, Marc

    2017-07-01

    Short-lived radionuclides (SLRs) with half-lives less than 100 Myr are known to have existed around the time of the formation of the solar system about 4.5 billion years ago. Understanding the production sources of SLRs is important for improving our understanding of processes taking place just after solar system formation, as well as their timescales. Early solar system models rely heavily on calculations from nuclear theory due to a lack of experimental data for the nuclear reactions taking place. In 2013, Bowers et al. measured 36Cl production cross sections via the 33S(α,p) reaction and reported cross sections that were systematically higher than predicted by Hauser-Feshbach codes. Soon after, a paper by Peter Mohr highlighted the challenges the new data would pose to current nuclear theory if verified. The 33S(α,p)36Cl reaction was re-measured at five energies between 0.78 MeV/nucleon and 1.52 MeV/nucleon, in the same range as measured by Bowers et al., yielding systematically lower cross sections than originally reported; the new results are in good agreement with the Hauser-Feshbach code TALYS. Loss of Cl carrier in the chemical extraction and errors in the determination of reaction energy ranges are both possible explanations for the artificially inflated cross sections measured in the previous work.

  7. Neutron scattering cross section measurements for Fe 56

    DOE PAGES

    Ramirez, A. P. D.; Vanhoy, J. R.; Hicks, S. F.; ...

    2017-06-09

    Elastic and inelastic differential cross sections for neutron scattering from 56Fe have been measured for several incident energies from 1.30 to 7.96 MeV at the University of Kentucky Accelerator Laboratory. Scattered neutrons were detected with a C6D6 liquid scintillation detector using pulse-shape discrimination and time-of-flight techniques. The deduced cross sections have been compared with previously reported data, predictions from the evaluation databases ENDF, JENDL, and JEFF, and theoretical calculations performed with different optical model potentials using the TALYS and EMPIRE nuclear reaction codes. The coupled-channel calculations based on the vibrational and soft-rotor models are found to describe the experimental (n,n0) and (n,n1) cross sections well.
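
    The time-of-flight technique mentioned above converts a measured flight time into neutron energy through E = ½ m_n (L/t)². The flight path and timing in the sketch below are hypothetical, and the non-relativistic form is adequate only for the few-MeV neutrons of interest here.

        NEUTRON_MASS_MEV = 939.565       # m_n c^2
        C_M_PER_NS = 0.299792458         # speed of light in m/ns

        def tof_to_energy_mev(flight_path_m, tof_ns):
            """Non-relativistic conversion E = 0.5 * m_n * (L/t)^2, adequate for
            few-MeV neutrons (v/c of order 0.1)."""
            beta = flight_path_m / (tof_ns * C_M_PER_NS)
            return 0.5 * NEUTRON_MASS_MEV * beta**2

        # Illustrative: a 3 MeV neutron over a hypothetical 4 m flight path arrives
        # after about 167 ns.
        print(f"{tof_to_energy_mev(4.0, 167.0):.2f} MeV")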

  8. Neutron scattering cross section measurements for 56Fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramirez, A. P. D.; Vanhoy, J. R.; Hicks, S. F.

    Elastic and inelastic differential cross sections for neutron scattering from 56Fe have been measured for several incident energies from 1.30 to 7.96 MeV at the University of Kentucky Accelerator Laboratory. Scattered neutrons were detected with a C6D6 liquid scintillation detector using pulse-shape discrimination and time-of-flight techniques. The deduced cross sections have been compared with previously reported data, predictions from the evaluation databases ENDF, JENDL, and JEFF, and theoretical calculations performed with different optical model potentials using the TALYS and EMPIRE nuclear reaction codes. The coupled-channel calculations based on the vibrational and soft-rotor models are found to describe the experimental (n,n0) and (n,n1) cross sections well.

  9. Neutron scattering cross section measurements for 56Fe

    NASA Astrophysics Data System (ADS)

    Ramirez, A. P. D.; Vanhoy, J. R.; Hicks, S. F.; McEllistrem, M. T.; Peters, E. E.; Mukhopadhyay, S.; Harrison, T. D.; Howard, T. J.; Jackson, D. T.; Lenzen, P. D.; Nguyen, T. D.; Pecha, R. L.; Rice, B. G.; Thompson, B. K.; Yates, S. W.

    2017-06-01

    Elastic and inelastic differential cross sections for neutron scattering from 56Fe have been measured for several incident energies from 1.30 to 7.96 MeV at the University of Kentucky Accelerator Laboratory. Scattered neutrons were detected with a C6D6 liquid scintillation detector using pulse-shape discrimination and time-of-flight techniques. The deduced cross sections have been compared with previously reported data, predictions from the evaluation databases ENDF, JENDL, and JEFF, and theoretical calculations performed with different optical model potentials using the TALYS and EMPIRE nuclear reaction codes. The coupled-channel calculations based on the vibrational and soft-rotor models are found to describe the experimental (n,n0) and (n,n1) cross sections well.

  10. Overview and evaluation of different nuclear level density models for the 123I radionuclide production.

    PubMed

    Nikjou, A; Sadeghi, M

    2018-06-01

    The 123I radionuclide (T1/2 = 13.22 h, β+ = 100%) is one of the most potent gamma emitters for nuclear medicine. In this study, the cyclotron production of this radionuclide via different nuclear reactions, namely 121Sb(α,2n), 122Te(d,n), 123Te(p,n), 124Te(p,2n), 124Xe(p,2n), 127I(p,5n) and 127I(d,6n), was investigated. The effect of various phenomenological nuclear level density models, such as the Fermi gas model (FGM), the Back-shifted Fermi gas model (BSFGM), the Generalized superfluid model (GSM) and the Enhanced generalized superfluid model (EGSM), as well as three microscopic level density models, was evaluated for cross section and production yield predictions. The SRIM code was used to obtain the target thickness. The 123I excitation functions of the reactions were calculated using the TALYS-1.8 and EMPIRE-3.2 nuclear codes and with data taken from the TENDL-2015 database, and the theoretical calculations were then compared with experimental measurements taken from the EXFOR database. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. 64Cu, a powerful positron emitter for immunoimaging and theranostic: Production via natZnO and natZnO-NPs.

    PubMed

    Karimi, Zahra; Sadeghi, Mahdi; Mataji-Kojouri, Naimeddin

    2018-07-01

    64Cu is one of the most beneficial radionuclides that can be used as a theranostic agent in Positron Emission Tomography (PET) imaging. In the current work, 64Cu was produced from zinc oxide nanoparticles (natZnO-NPs) and zinc oxide powder (natZnO) via the 64Zn(n,p)64Cu reaction in the Tehran Research Reactor (TRR), and the activity values were compared with each other. The theoretical activity of 64Cu was also calculated with MCNPX-2.6, and the cross sections of this reaction were calculated using the TALYS-1.8, EMPIRE-3.2.2 and ALICE/ASH nuclear codes and compared with experimental values. Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD) analyses were used for sample characterization. From these results, it is concluded that the 64Cu activity obtained with the nanoscale target was higher than that obtained with the bulk target and showed good agreement with the MCNPX result. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Double differential light charged particle emission cross sections for some structural fusion materials

    NASA Astrophysics Data System (ADS)

    Sarpün, Ismail Hakki; Aydın, Abdullah; Tel, Eyyup

    2017-09-01

    In fusion reactors, neutron-induced radioactivity strongly depends on the irradiated material, so a proper selection of structural materials will limit the radioactive inventory in a fusion reactor. First-wall and blanket components have the highest radioactivity concentration because they are the most flux-exposed structures. The main objective of fusion structural material research is the development and selection of materials for reactor components with good thermo-mechanical and physical properties, coupled with low-activation characteristics. Double differential light charged particle emission cross sections, which are fundamental data for determining nuclear heating and material damage in structural fusion material research, have been calculated for several target nuclei with the TALYS 1.8 nuclear reaction code at 14-15 MeV incident neutron energy and compared with available experimental data in the EXFOR library. The direct, compound and pre-equilibrium reaction contributions have been calculated theoretically, and the dominant contribution has been determined for proton, deuteron and alpha particle emission.

  13. Language does not come "in boxes": Assessing discrepancies between adverse drug reactions spontaneous reporting and MedDRA® codes in European Portuguese.

    PubMed

    Inácio, Pedro; Airaksinen, Marja; Cavaco, Afonso

    2015-01-01

    The description of adverse drug reactions (ADRs) by health care professionals (HCPs) can be highly variable. This variation can affect the coding of a reaction with the Medical Dictionary for Regulatory Activities (MedDRA(®)), the gold standard for pharmacovigilance database entries. Ultimately, the strength of a safety signal can be compromised. The objective of this study was to assess: 1) the participation of different HCPs in ADR reporting, and 2) the variation of the language used by HCPs when describing ADRs, compared with the corresponding MedDRA(®) codes. A retrospective content analysis was performed, using the database of spontaneous reports submitted by HCPs in the region of the Southern Pharmacovigilance Unit, Portugal. Data retrieved consisted of the idiomatic description of all ADRs occurring in 2004 (the first year of the Unit's activity, n = 53) and in 2012 (n = 350). The agreement between the language used by HCPs and the MedDRA(®) dictionary codes was quantitatively assessed. From a total of 403 spontaneous reports received in the two years, 896 words describing ADRs were collected. HCPs presented different levels of pharmacovigilance participation and ADR idiomatic description, with pharmacists providing the greatest overall contribution. The agreement between the language used in spontaneous reports and the corresponding MedDRA(®) terms varied by HCP background, with nurses presenting poorer results than medical doctors and pharmacists when the dictionary is considered the gold standard for ADR language. Lexical accuracy and semantic variations exist between different HCP groups. These differences may interfere with the strength of a generated safety signal. Clinical and MedDRA(®) terminology training should be targeted to increase not only the frequency, but also the quality of spontaneous reports, in accordance with HCPs' experience and background. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A Radiation Shielding Code for Spacecraft and Its Validation

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.

    2000-01-01

    The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of the incident radiation is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and the database which describe these interactions is given. Validation of the code against recent Monte Carlo benchmarks and against laboratory and flight measurements is also included.

  15. Liquid rocket combustor computer code development

    NASA Technical Reports Server (NTRS)

    Liang, P. Y.

    1985-01-01

    The Advanced Rocket Injector/Combustor Code (ARICC), which has been developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.

  16. Jet-A reaction mechanism study for combustion application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Acosta, Waldo

    1991-01-01

    Simplified chemical kinetic reaction mechanisms for the combustion of Jet A fuel were studied. Initially, 40 reacting species and 118 elementary chemical reactions were chosen based on a literature review. Through a sensitivity analysis with the LSENS General Kinetics and Sensitivity Analysis Code, a reduced mechanism of 16 species and 21 elementary chemical reactions was determined. This mechanism is first justified by comparison of calculated ignition delay times with available shock tube data; it is then validated by comparison of calculated emissions from a plug flow reactor code with in-house flame tube data.

  17. LINE: a code which simulates spectral line shapes for fusion reaction products generated by various speed distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaughter, D.

    1985-03-01

    A computer code is described which estimates the energy spectrum or "line shape" for the charged particles and γ-rays produced by the fusion of low-Z ions in a hot plasma. The simulation has several built-in ion velocity distributions characteristic of heated plasmas, and it also accepts arbitrary speed and angular distributions, although they must all be symmetric about the z-axis. An energy spectrum of one of the reaction products (ion, neutron, or γ-ray) is calculated at one angle with respect to the symmetry axis. The results are shown in tabular form, they are plotted graphically, and the moments of the spectrum to order ten are calculated both with respect to the origin and with respect to the mean.

  18. ASTEC—the Aarhus STellar Evolution Code

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, Jørgen

    2008-08-01

    The Aarhus code is the result of a long development, starting in 1974, and still ongoing. A novel feature is the integration of the computation of adiabatic oscillations for specified models as part of the code. It offers substantial flexibility in terms of microphysics and has been carefully tested for the computation of solar models. However, considerable development is still required in the treatment of nuclear reactions, diffusion and convective mixing.

  19. Crucial steps to life: From chemical reactions to code using agents.

    PubMed

    Witzany, Guenther

    2016-02-01

    The concepts of the origin of the genetic code and the definitions of life changed dramatically after the RNA world hypothesis. Main narratives in molecular biology and genetics such as the "central dogma," "one gene one protein" and "non-coding DNA is junk" have since been falsified. RNA moved from being a transient intermediate molecule to centre stage. Additionally, the abundance of empirical data concerning non-random genetic change operators such as the variety of mobile genetic elements, persistent viruses and defectives does not fit with the dominant narrative of replication errors (mutations) as the main driving force creating genetic novelty and diversity. The reductionistic and mechanistic views on the physico-chemical properties of the genetic code are no longer convincing as appropriate descriptions of the abundance of non-random genetic content operators which are active in natural genetic engineering and natural genome editing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Stochastic radiation track structure codes are of great interest for space radiation studies and hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they were also used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yield of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we have developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program for use with the Monte-Carlo track structure code Relativistic Ion Tracks (RITRACKS). This recent addition should greatly expand the capabilities of RITRACKS, notably to simulate DNA damage by both the direct and indirect effect.
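
    To make the IRT idea above concrete, the sketch below samples an independent reaction time for a single diffusion-controlled radical pair from the Smoluchowski reaction probability; the pair separation, reaction radius and diffusion coefficient are illustrative values only and are not taken from RITRACKS or the Green's function code described in the abstract.

```python
import numpy as np
from scipy.special import erfcinv

def sample_irt_reaction_time(r0, R, D, rng):
    """Sample an independent reaction time for one diffusion-controlled pair.

    For a pair starting at separation r0 with reaction radius R and mutual
    diffusion coefficient D, the Smoluchowski reaction probability is
        W(t) = (R / r0) * erfc((r0 - R) / (2 * sqrt(D * t))),
    so the ultimate reaction probability is W(inf) = R / r0. Inverting W at a
    uniform random number gives the reaction time; escaping pairs return inf.
    """
    u = rng.random()
    if u >= R / r0:
        return np.inf                          # pair diffuses apart, never reacts
    x = erfcinv(u * r0 / R)                    # erfc^-1 of W(t) / W(inf)
    return ((r0 - R) / (2.0 * x)) ** 2 / D

rng = np.random.default_rng(1)
# Illustrative pair: 1 nm initial separation, 0.5 nm reaction radius, D = 5e-9 m^2/s
times = [sample_irt_reaction_time(1.0e-9, 0.5e-9, 5.0e-9, rng) for _ in range(10000)]
reacted = sum(np.isfinite(t) for t in times)
print(f"reacted fraction ~ {reacted / len(times):.2f} (limit R/r0 = 0.50)")
```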

  1. Study of 179Hf(m2) excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnevsky, I. N.; Zheltonozhsky, V. A., E-mail: zhelton@kinr.kiev.ua; Savrasov, A. N.

    Isomeric ratios of 179Hf(m2,g) yields in the (γ,n) reaction and the cross section for the 179Hf(m2) population in the (α,p) reaction are measured for the first time at the end-point energies of 15.1 and 17.5 MeV for bremsstrahlung photons and 26 MeV for alpha particles. The results are σ = (1.1 ± 0.11) × 10^-27 cm^2 for the 176Lu(α,p)179Hf(m2) reaction and Y_m2/Y_g = (6.1 ± 0.3) × 10^-6 and (3.7 ± 0.2) × 10^-6 for the 180Hf(γ,n)179Hf(m2) reaction at E_ep = 15.1 and 17.5 MeV, respectively. The experimental data on the relative 179Hf(m2) yield indicate a single-humped shape of the excitation function for the 180Hf(γ,n)179Hf(m2) reaction. Simulation is performed using the TALYS-1.4 and EMPIRE-3.2 codes.

  2. GCKP84-general chemical kinetics code for gas-phase flow and batch processes including heat transfer effects

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Scullin, V. J.

    1984-01-01

    A general chemical kinetics code is described for complex, homogeneous ideal gas reactions in any chemical system. The main features of the GCKP84 code are flexibility, convenience, and speed of computation for many different reaction conditions. The code, which replaces the GCKP code published previously, solves numerically the differential equations for complex reactions in a batch system or one-dimensional inviscid flow. It also solves numerically the nonlinear algebraic equations describing the well-stirred reactor. A new state-of-the-art numerical integration method is used for greatly increased speed in handling systems of stiff differential equations. The theory and the computer program, including details of input preparation and a guide to using the code, are given.
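
    GCKP84 itself is a Fortran code with its own stiff integrator; purely as an illustration of the class of batch-kinetics problem it solves, the minimal Python sketch below integrates the classic Robertson stiff mechanism with a BDF solver. The mechanism, rate constants and tolerances are textbook values, not GCKP84 input.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson's classic stiff batch-kinetics problem:
#   A -> B (k1 = 0.04), B + B -> C + B (k2 = 3e7), B + C -> A + C (k3 = 1e4)
def robertson(t, y):
    a, b, c = y
    return [-0.04 * a + 1.0e4 * b * c,
             0.04 * a - 1.0e4 * b * c - 3.0e7 * b * b,
             3.0e7 * b * b]

sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10, dense_output=True)

for t in (1.0, 1.0e2, 1.0e4):
    a, b, c = sol.sol(t)
    print(f"t = {t:8.1e}   A = {a:.4e}   B = {b:.4e}   C = {c:.4e}")
```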

  3. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive power no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.

  4. An interactive code (NETPATH) for modeling NET geochemical reactions along a flow PATH, version 2.0

    USGS Publications Warehouse

    Plummer, Niel; Prestemon, Eric C.; Parkhurst, David L.

    1994-01-01

    NETPATH is an interactive Fortran 77 computer program used to interpret net geochemical mass-balance reactions between an initial and final water along a hydrologic flow path. Alternatively, NETPATH computes the mixing proportions of two to five initial waters and net geochemical reactions that can account for the observed composition of a final water. The program utilizes previously defined chemical and isotopic data for waters from a hydrochemical system. For a set of mineral and (or) gas phases hypothesized to be the reactive phases in the system, NETPATH calculates the mass transfers in every possible combination of the selected phases that accounts for the observed changes in the selected chemical and (or) isotopic compositions observed along the flow path. The calculations are of use in interpreting geochemical reactions, mixing proportions, evaporation and (or) dilution of waters, and mineral mass transfer in the chemical and isotopic evolution of natural and environmental waters. Rayleigh distillation calculations are applied to each mass-balance model that satisfies the constraints to predict carbon, sulfur, nitrogen, and strontium isotopic compositions at the end point, including radiocarbon dating. DB is an interactive Fortran 77 computer program used to enter analytical data into NETPATH, and calculate the distribution of species in aqueous solution. This report describes the types of problems that can be solved, the methods used to solve problems, and the features available in the program to facilitate these solutions. Examples are presented to demonstrate most of the applications and features of NETPATH. The codes DB and NETPATH can be executed in the UNIX or DOS environment. This report replaces U.S. Geological Survey Water-Resources Investigations Report 91-4078, by Plummer and others, which described the original release of NETPATH, version 1.0 (dated December, 1991), and documents revisions and enhancements that are included in version 2.0.
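
    The core of a net mass-balance calculation of the kind NETPATH performs can be written as a small linear system: the columns hold the element stoichiometry of each candidate phase and the right-hand side is the change in dissolved element totals between the initial and final waters. The sketch below is only an illustration with invented phases and concentrations; it is not NETPATH input or output, and it ignores redox, isotopes and mixing.

```python
import numpy as np

# Hypothetical example: explain the change in dissolved Ca, C and S between an
# initial and a final water using three candidate phases.
# Columns: calcite CaCO3, CO2(g), gypsum CaSO4.2H2O; rows: Ca, C, S (mol per mol of phase).
stoich = np.array([[1.0, 0.0, 1.0],   # Ca
                   [1.0, 1.0, 0.0],   # C
                   [0.0, 0.0, 1.0]])  # S

initial = np.array([1.2, 2.5, 0.3])   # mmol/kg water in the initial water
final   = np.array([2.0, 4.1, 0.8])   # mmol/kg water in the final water

# Positive transfer = dissolution/ingassing into the water, negative = precipitation/outgassing.
transfer = np.linalg.solve(stoich, final - initial)
for phase, amount in zip(["calcite", "CO2(g)", "gypsum"], transfer):
    print(f"{phase:8s}: {amount:+.2f} mmol/kg")
```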

  5. A Consistent System for Coding Laboratory Samples

    NASA Astrophysics Data System (ADS)

    Sih, John C.

    1996-07-01

    A formal laboratory coding system is presented to keep track of laboratory samples. Preliminary useful information regarding the sample (origin and history) is gained without consulting a research notebook. Since this system uses and retains the same research notebook page number for each new experiment (reaction), finding and distinguishing products (samples) of the same or different reactions becomes an easy task. Using this system multiple products generated from a single reaction can be identified and classified in a uniform fashion. Samples can be stored and filed according to stage and degree of purification, e.g. crude reaction mixtures, recrystallized samples, chromatographed or distilled products.
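
    The abstract describes a labelling convention rather than an algorithm, so any concrete format is necessarily invented. A tiny helper of the following kind illustrates the idea of retaining the notebook page number for every product of a reaction and appending a purification-stage suffix; the initials, separators and stage abbreviations are hypothetical.

```python
# Hypothetical sample-label helper: researcher initials + notebook page (shared by
# all samples from that reaction) + product letter + purification-stage suffix.
STAGES = {"crude": "cr", "recrystallized": "rx", "chromatographed": "ch", "distilled": "di"}

def sample_code(initials: str, notebook_page: int, product: str, stage: str) -> str:
    return f"{initials.upper()}-{notebook_page}-{product.upper()}-{STAGES[stage]}"

print(sample_code("jcs", 1123, "a", "crude"))            # JCS-1123-A-cr
print(sample_code("jcs", 1123, "b", "chromatographed"))  # JCS-1123-B-ch
```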

  6. Spallation reactions: A successful interplay between modeling and applications

    NASA Astrophysics Data System (ADS)

    David, J.-C.

    2015-06-01

    Spallation reactions are a type of nuclear reaction which occurs in space through the interaction of cosmic rays with interstellar bodies. The first spallation reactions induced with an accelerator took place in 1947 at the Berkeley cyclotron (University of California) with 200 MeV deuteron and 400 MeV alpha beams. They highlighted the multiple emission of neutrons and charged particles and the production of a large number of residual nuclei far different from the target nuclei. In the same year, R. Serber described the reaction in two steps: a first and fast one with high-energy particle emission leading to an excited remnant nucleus, and a second one, much slower, the de-excitation of the remnant. In 2010 the IAEA organized a workshop to present the results of the most widely used spallation codes within a benchmark of spallation models. While one of the goals was to understand the deficiencies, if any, in each code, one remarkable outcome was the overall high quality of some models and hence the great improvement achieved since Serber. Particle transport codes can then rely on such spallation models to treat the reactions between a light particle and an atomic nucleus at energies spanning from a few tens of MeV up to some GeV. An overview of spallation reaction modeling is presented in order to point out the incomparable contribution of models based on basic physics to the numerous applications where such reactions occur. Validations and benchmarks, which are necessary steps in the improvement process, are also addressed, as well as potential future domains of development. Spallation reaction modeling is a representative case of continuous studies aimed at understanding a reaction mechanism that end up providing a powerful tool.

  7. Color coding of control room displays: the psychocartography of visual layering effects.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2007-06-01

    To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).

  8. Exclusive Reactions Involving Pions and Nucleons

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.; Tripathi, R. K.

    2002-01-01

    The HZETRN code requires inclusive cross sections as input. One of the methods used to calculate these cross sections requires knowledge of all exclusive processes contributing to the inclusive reaction. Conservation laws are used to determine all possible exclusive reactions involving strong interactions between pions and nucleons. Inclusive particle masses are subsequently determined and are needed in cross-section calculations for inclusive pion production.

  9. Proton bombarded reactions of Calcium target nuclei

    NASA Astrophysics Data System (ADS)

    Tel, Eyyup; Sahan, Muhittin; Sarpün, Ismail Hakki; Kavun, Yusuf; Gök, Ali Armagan; Depedelen, Mesut

    2017-09-01

    In this study, proton-induced nuclear reaction calculations for calcium target nuclei have been investigated in the incident proton energy range of 1-50 MeV. The excitation functions for 40Ca target nuclei have been calculated by using the PCROSS nuclear reaction calculation code. The Weisskopf-Ewing and full exciton models were used for the equilibrium and pre-equilibrium calculations, respectively. The excitation functions for the 40Ca(p,α), (p,n), and (p,p) reactions have also been calculated using the semi-empirical formula of Tel et al. [5].

  10. A systematic review of validated methods for identifying transfusion-related ABO incompatibility reactions using administrative and claims data.

    PubMed

    Carnahan, Ryan M; Kee, Vicki R

    2012-01-01

    This paper aimed to systematically review algorithms to identify transfusion-related ABO incompatibility reactions in administrative data, with a focus on studies that have examined the validity of the algorithms. A literature search was conducted using PubMed, Iowa Drug Information Service database, and Embase. A Google Scholar search was also conducted because of the difficulty identifying relevant studies. Reviews were conducted by two investigators to identify studies using data sources from the USA or Canada because these data sources were most likely to reflect the coding practices of Mini-Sentinel data sources. One study was found that validated International Classification of Diseases (ICD-9-CM) codes representing transfusion reactions. None of these cases were ABO incompatibility reactions. Several studies consistently used ICD-9-CM code 999.6, which represents ABO incompatibility reactions, and a technical report identified the ICD-10 code for these reactions. One study included the E-code E8760 for mismatched blood in transfusion in the algorithm. Another study reported finding no ABO incompatibility reaction codes in the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, which contains data of 2.23 million patients who received transfusions, raising questions about the sensitivity of administrative data for identifying such reactions. Two studies reported perfect specificity, with sensitivity ranging from 21% to 83%, for the code identifying allogeneic red blood cell transfusions in hospitalized patients. There is no information to assess the validity of algorithms to identify transfusion-related ABO incompatibility reactions. Further information on the validity of algorithms to identify transfusions would also be useful. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Analysis of isomeric ratios for medium-mass nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danagulyan, A. S.; Hovhannisyan, G. H., E-mail: hov-gohar@ysu.am; Bakhshiyan, T. M.

    Values of the isomeric ratios for product nuclei originating from simple charge-exchange reactions were analyzed. The cross sections for the formation of product nuclei in ground and isomeric states were calculated with the aid of the TALYS 1.4 and EMPIRE 3.2 codes. The calculated values of the isomeric ratios were compared with their experimental counterparts taken from the EXFOR database. For the 86,87Y, 94,95,96,99Tc, and 44Sc nuclei, the experimental values of the isomeric ratios exceed the respective calculated values. The nuclei in question feature weak deformations and have high-spin yrast lines and rotational bands. The possible reason behind the discrepancy between theoretical and experimental isomeric ratios is that the decay of yrast states leads with a high probability to the formation of isomeric states of detected product nuclei.

  12. Modeling chemical gradients in sediments under losing and gaining flow conditions: The GRADIENT code

    NASA Astrophysics Data System (ADS)

    Boano, Fulvio; De Falco, Natalie; Arnon, Shai

    2018-02-01

    Interfaces between sediments and water bodies often represent biochemical hotspots for nutrient reactions and are characterized by steep concentration gradients of different reactive solutes. Vertical profiles of these concentrations are routinely collected to obtain information on nutrient dynamics, and simple codes have been developed to analyze these profiles and determine the magnitude and distribution of reaction rates within sediments. However, existing publicly available codes do not consider the potential contribution of water flow in the sediments to nutrient transport, and their applications to field sites with significant water-borne nutrient fluxes may lead to large errors in the estimated reaction rates. To fill this gap, the present work presents GRADIENT, a novel algorithm to evaluate distributions of reaction rates from observed concentration profiles. GRADIENT is a Matlab code that extends a previously published framework to include the role of nutrient advection, and provides robust estimates of reaction rates in sediments with significant water flow. This work discusses the theoretical basis of the method and shows its performance by comparing the results to a series of synthetic data and to laboratory experiments. The results clearly show that in systems with losing or gaining fluxes, the inclusion of such fluxes is critical for estimating local and overall reaction rates in sediments.
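
    The balance that GRADIENT exploits can be stated compactly: at steady state the local net reaction rate follows from the measured profile as R(z) = q dC/dz - De d^2C/dz^2, so dropping the advective term q dC/dz biases the estimate whenever there is losing or gaining flow. The finite-difference sketch below illustrates this on a synthetic exponential profile; the profile shape, diffusion coefficient and Darcy flux are invented numbers, and the sign conventions are only one possible choice, not those of the GRADIENT code.

```python
import numpy as np

# Synthetic steady-state concentration profile in a sediment column (illustrative only).
z = np.linspace(0.0, 0.10, 21)            # depth below the interface [m]
C = 250.0 * np.exp(-z / 0.03)             # solute concentration [umol/L]

De = 1.0e-9                               # effective diffusion coefficient [m^2/s]
q  = 5.0e-7                               # downward (losing) Darcy flux [m/s]

dC_dz   = np.gradient(C, z)               # first derivative of the profile
d2C_dz2 = np.gradient(dC_dz, z)           # second derivative

# Steady 1-D advection-diffusion-reaction balance: De*C'' - q*C' + R(z) = 0
R_with_advection = q * dC_dz - De * d2C_dz2
R_diffusion_only = -De * d2C_dz2          # what a purely diffusive analysis would infer

bias = np.max(np.abs(R_with_advection - R_diffusion_only))
print(f"max bias from neglecting advection: {bias:.3e} umol/(L s)")
```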

  13. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX, calibrated from experiments. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
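
    As a minimal illustration of the single-step Arrhenius burn mentioned above (and not the calibrated HMX kinetics of the paper), the sketch below advances a reacted mass fraction lambda with dlambda/dt = Z (1 - lambda) exp(-Ta/T); the pre-exponential factor, activation temperature, hold temperature and time step are placeholder values.

```python
import numpy as np

# Single-step Arrhenius burn: dlambda/dt = Z * (1 - lambda) * exp(-Ta / T).
# Z and Ta are placeholders, not calibrated HMX values.
Z, Ta = 5.0e13, 26500.0                # pre-exponential [1/s], activation temperature [K]

def advance_burn(lmbda, T, dt):
    """Explicit update of the reaction progress over one time step."""
    rate = Z * (1.0 - lmbda) * np.exp(-Ta / T)
    return min(1.0, lmbda + rate * dt)

lmbda, T, dt = 0.0, 1500.0, 1.0e-9     # start unreacted, hold 1500 K, 1 ns steps
for _ in range(2000):
    lmbda = advance_burn(lmbda, T, dt)
print(f"reacted fraction after 2.0 us at {T:.0f} K: {lmbda:.3f}")
```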

  14. Production of scandium-44m and scandium-44g with deuterons on calcium-44: cross section measurements and production yield calculations.

    PubMed

    Duchemin, C; Guertin, A; Haddad, F; Michel, N; Métivier, V

    2015-09-07

    HIGHLIGHTS • Production of Sc-44m, Sc-44g and contaminants. • Experimental values determined using the stacked-foil technique. • Thick-Target production Yield (TTY) calculations. • Comparison with the TALYS code version 1.6. Among the large number of radionuclides of medical interest, Sc-44 is promising for PET imaging. Either the ground-state Sc-44g or the metastable-state Sc-44m can be used for such applications, depending on the molecule used as vector. This study compares the production rates of both Sc-44 states when protons or deuterons are used as projectiles on an enriched calcium-44 target. This work presents the first set of data for the deuteron route. The results are compared with the TALYS code. The Thick-Target production Yields of Sc-44m and Sc-44g are calculated and compared with those for the proton route for three different scenarios: the production of Sc-44g for conventional PET imaging, its production for the new 3γ imaging technique developed at the SUBATECH laboratory, and the production of a Sc-44m/Sc-44g in vivo generator for antibody labelling.

  15. Multiplexed Detection of Cytokines Based on Dual Bar-Code Strategy and Single-Molecule Counting.

    PubMed

    Li, Wei; Jiang, Wei; Dai, Shuang; Wang, Lei

    2016-02-02

    Cytokines play important roles in the immune system and have been regarded as biomarkers. Because a single cytokine is not specific and accurate enough to meet strict diagnostic requirements in practice, in this work we constructed a multiplexed detection method for cytokines based on a dual bar-code strategy and single-molecule counting. Taking interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) as model analytes, first, the magnetic nanobead was functionalized with the second antibody and primary bar-code strands, forming a magnetic nanoprobe. Then, through the specific reaction of the second antibody with the antigen fixed by the primary antibody, a sandwich-type immunocomplex was formed on the substrate. Next, the primary bar-code strands as amplification units triggered multibranched hybridization chain reaction (mHCR), producing nicked double-stranded polymers with multiple branched arms, which served as secondary bar-code strands. Finally, the secondary bar-code strands hybridized with the multimolecule-labeled fluorescence probes, generating enhanced fluorescence signals. The numbers of fluorescence dots were counted one by one for quantification with an epi-fluorescence microscope. By integrating the primary and secondary bar-code-based amplification strategy and the multimolecule-labeled fluorescence probes, this method displayed excellent sensitivity, with detection limits of 5 fM for both targets. Unlike the typical bar-code assay, in which the bar-code strands must be released and identified on a microarray, this method is more direct. Moreover, because of the selective immune reaction and the dual bar-code mechanism, the resulting method could detect the two targets simultaneously. Multiplexed analysis in human serum was also performed, suggesting that our strategy is reliable and has great potential for application in early clinical diagnosis.

  16. First cross-section measurements of the reactions 107,109Ag(p,γ)108,110Cd at energies relevant to the p process

    NASA Astrophysics Data System (ADS)

    Khaliel, A.; Mertzimekis, T. J.; Asimakopoulou, E.-M.; Kanellakopoulos, A.; Lagaki, V.; Psaltis, A.; Psyrra, I.; Mavrommatis, E.

    2017-09-01

    Background: One of the primary objectives of the field of Nuclear Astrophysics is the study of the elemental and isotopic abundances in the universe. Although significant progress has been made in understanding the mechanisms behind the production of a large number of nuclides in the isotopic chart, there are still many open questions regarding a number of neutron-deficient nuclei, the p nuclei. To that end, experimentally deduced nuclear reaction cross sections can provide invaluable input to astrophysical models. Purpose: The reactions 107,109Ag(p,γ)108,110Cd have been studied at energies inside the astrophysically relevant energy window in an attempt to provide experimental data required for the testing of reaction-rate predictions in terms of the statistical model of Hauser-Feshbach around the p nucleus 108Cd. Methods: The experiments were performed with in-beam γ-ray spectroscopy with proton beams accelerated by the Tandem Van de Graaff Accelerator at NCSR "Demokritos" impinging on a target of natural silver. A set of high-purity germanium detectors was employed to record the emitted radiation. Results: A first set of total cross-section measurements in radiative proton-capture reactions involving 107,109Ag, producing the p nucleus 108Cd, inside the astrophysically relevant energy window is reported. The experimental results are compared to theoretical calculations using TALYS. An overall good agreement between the data and the theoretical calculations has been found. Conclusions: The results reported in this work add new information to the relatively unexplored p process. The present measurements can serve as a reference point in understanding the nuclear parameters in the related astrophysical environments and for future theoretical modeling and experimental works.

  17. Quasicontinuum γ decay of 91,92Zr: Benchmarking indirect (n,γ) cross section measurements for the s process

    DOE PAGES

    Guttormsen, M.; Goriely, S.; Larsen, A. C.; ...

    2017-08-21

    Here, nuclear level densities (NLDs) and γ-ray strength functions (γSFs) have been extracted from particle-γ coincidences of the 92Zr(p,p'γ)92Zr and 92Zr(p,dγ)91Zr reactions using the Oslo method. The new 91,92Zr γSF data, combined with photonuclear cross sections, cover the whole energy range from Eγ ≈ 1.5 MeV up to the giant dipole resonance at Eγ ≈ 17 MeV. The wide-range γSF data display structures at Eγ ≈ 9.5 MeV, compatible with a superposition of the spin-flip M1 resonance and a pygmy E1 resonance. Furthermore, the γSF shows a minimum at Eγ ≈ 2–3 MeV and an increase at lower γ-ray energies. The experimentally constrained NLDs and γSFs are shown to reproduce known (n,γ) and Maxwellian-averaged cross sections for 91,92Zr using the TALYS reaction code, thus serving as a benchmark for this indirect method of estimating (n,γ) cross sections for Zr isotopes.
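
    For readers unfamiliar with the Maxwellian-averaged cross sections (MACS) mentioned above, the sketch below evaluates the standard definition MACS(kT) = (2/sqrt(pi)) ∫ σ(E) E exp(-E/kT) dE / ∫ E exp(-E/kT) dE on a pointwise σ(E) table. The cross section used here is a synthetic 1/v shape, not the 91,92Zr data or a TALYS output.

```python
import numpy as np

def macs(E, sigma, kT):
    """Maxwellian-averaged cross section at thermal energy kT (same energy unit as E)."""
    w = E * np.exp(-E / kT)
    return (2.0 / np.sqrt(np.pi)) * np.trapz(sigma * w, E) / np.trapz(w, E)

# Synthetic 1/v capture cross section normalised to 1 barn at 25.3 meV (thermal point).
E = np.linspace(1.0e-4, 2.0, 200000)          # neutron energy grid [MeV]
sigma = 1.0 * np.sqrt(25.3e-9 / E)            # [barn]

print(f"MACS(kT = 30 keV) = {macs(E, sigma, 0.030) * 1e3:.2f} mbarn")
```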

  19. OntoADR a semantic resource describing adverse drug reactions to support searching, coding, and information retrieval.

    PubMed

    Souvignet, Julien; Declerck, Gunnar; Asfari, Hadyl; Jaulent, Marie-Christine; Bousquet, Cédric

    2016-10-01

    Efficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR. The method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts. We built a semantic resource for ADRs enabling a new type of semantics-based term search. OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest. The methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. Our custom groupings of MedDRA terms were used while performing signal

  20. Reactive transport codes for subsurface environmental simulation

    DOE PAGES

    Steefel, C. I.; Appelo, C. A. J.; Arora, B.; ...

    2014-09-26

    A general description of the mathematical and numerical formulations used in modern numerical reactive transport codes relevant for subsurface environmental simulations is presented. The formulations are followed by short descriptions of commonly used and available subsurface simulators that consider continuum representations of flow, transport, and reactions in porous media. These formulations are applicable to most of the subsurface environmental benchmark problems included in this special issue. The list of codes described briefly here includes PHREEQC, HPx, PHT3D, OpenGeoSys (OGS), HYTEC, ORCHESTRA, TOUGHREACT, eSTOMP, HYDROGEOCHEM, CrunchFlow, MIN3P, and PFLOTRAN. The descriptions include a high-level list of capabilities for each of the codes, along with a selective list of applications that highlight their capabilities and historical development.

  1. Application of the DART Code for the Assessment of Advanced Fuel Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Totev, T.

    2007-07-01

    The Dispersion Analysis Research Tool (DART) code is a dispersion fuel analysis code that contains mechanistically-based fuel and reaction-product swelling models, a one-dimensional heat transfer analysis, and mechanical deformation models. DART has been used to simulate the irradiation behavior of uranium oxide, uranium silicide, and uranium molybdenum aluminum dispersion fuels, as well as their monolithic counterparts. The thermal-mechanical DART code has been validated against RERTR tests performed in the ATR for irradiation data on interaction thickness, fuel, matrix, and reaction product volume fractions, and plate thickness changes. The DART fission gas behavior model has been validated against UO2 fission gas release data as well as measured fission gas-bubble size distributions. Here DART is utilized to analyze various aspects of the observed bubble growth in U-Mo/Al interaction product. (authors)

  2. Development of a Monte Carlo code for the data analysis of the 18F(p,α)15O reaction at astrophysical energies

    NASA Astrophysics Data System (ADS)

    Caruso, A.; Cherubini, S.; Spitaleri, C.; Crucillà, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; de Séréville, N.

    2015-02-01

    Novae are astrophysical events (violent explosions) occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called "narrow systems" because the two components interact with each other: a process of mass exchange results in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and the temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of "hot hydrogen burning" are then released into the interstellar medium as a result of the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation with energies below 511 keV. The gamma rays produced by electron-positron annihilation of the positrons emitted in the decay of 18F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of studying the nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main destruction channels. This reaction was therefore studied at energies of astrophysical interest. The experiment performed at RIKEN, Japan, has as its objective the study of the 18F(p,α)15O reaction, using a beam of 18F produced at CRIB, to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo code developed to be used in the data analysis process.

  3. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered. The models include static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave including boundary layer correction, and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan. This method has shown greater efficiency and stability, with equal or better accuracy, than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. With minor modifications, it should also be easily implemented on other
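
    The sensitivity coefficients mentioned above are simply derivatives of predicted concentrations with respect to rate parameters. LSENS obtains them with the decoupled direct method; purely for illustration, the sketch below checks a brute-force central-difference estimate against the analytic result for a first-order decay, a far simpler calculation than anything LSENS is designed for.

```python
import numpy as np
from scipy.integrate import solve_ivp

def concentration_A(k, t_eval, A0=1.0):
    """[A](t) for the first-order reaction A -> B with rate constant k."""
    sol = solve_ivp(lambda t, y: [-k * y[0]], (0.0, t_eval[-1]), [A0],
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)
    return sol.y[0]

k, t = 0.5, np.array([0.0, 1.0, 2.0, 4.0])
dk = 1e-6 * k
sens_fd = (concentration_A(k + dk, t) - concentration_A(k - dk, t)) / (2 * dk)
sens_exact = -t * np.exp(-k * t)              # d[A]/dk for A0 = 1

for ti, s_fd, s_ex in zip(t, sens_fd, sens_exact):
    print(f"t = {ti:3.1f}   d[A]/dk (finite diff) = {s_fd:+.6f}   analytic = {s_ex:+.6f}")
```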

  4. Experimental measurements with Monte Carlo corrections and theoretical calculations of neutron inelastic scattering cross section of 115In

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Xiao, Jun; Luo, Xiaobing

    2016-10-01

    The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV with the neutron capture cross sections of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross-section data were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.

  5. Fission Reaction Event Yield Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagmann, Christian; Verbeke, Jerome; Vogt, Ramona

    FREYA (Fission Reaction Event Yield Algorithm) is a code that simulates the decay of a fissionable nucleus at a specified excitation energy. In its present form, FREYA models spontaneous fission and neutron-induced fission up to 20 MeV. It includes the possibility of neutron emission from the nucleus prior to its fission (nth-chance fission).

  6. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam; Sundararaghavan, Veera

    2015-06-01

    In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX, calibrated from experiments. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composite will also be discussed.

  7. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  8. Dipole strength in 80Se below the neutron-separation energy for the nuclear transmutation of 79Se

    NASA Astrophysics Data System (ADS)

    Makinaga, Ayano; Massarczyk, Ralph; Beard, Mary; Schwengner, Ronald; Otsu, Hideaki; Müller, Stefan; Röder, Marko; Schmidt, Konrad; Wagner, Andreas

    2017-09-01

    The γ-ray strength function (γSF) in 80Se is an important parameter for estimating the neutron-capture cross section of 79Se, which is one of the long-lived fission products (LLFPs). Until now, the γSF method was applied to 80Se only above the neutron-separation energy (Sn), and the evaluated 79Se(n,γ) cross section has an instability caused by the γSF below Sn. We studied the dipole-strength distribution of 80Se in a photon-scattering experiment using bremsstrahlung produced by an electron beam with an energy of 11.5 MeV at the linear accelerator ELBE at HZDR. The present photoabsorption cross section of 80Se was combined with results of (γ,n) experiments and compared with predictions using the TALYS code. We also estimated the 79Se(n,γ) cross sections and compared them with TALYS predictions and earlier work by other groups.

  9. Resident Reactions to Person-Centered Communication by Long-Term Care Staff.

    PubMed

    Savundranayagam, Marie Y; Sibalija, Jovana; Scotchmer, Emma

    2016-09-01

    Long-term care staff caregivers who are person centered incorporate the life history, preferences, and feelings of residents with dementia during care interactions. Communication is essential for person-centered care. However, little is known about residents' verbal reactions when staff use person-centered communication. Accordingly, this study investigated the impact of person-centered communication and missed opportunities for such communication by staff on resident reactions. Conversations (N = 46) between staff-resident dyads were audio-recorded during routine care tasks over 12 weeks. Staff utterances were coded for person-centered communication and missed opportunities. Resident utterances were coded for positive reactions, such as cooperation, and negative reactions, such as distress. Linear regression analyses revealed that the more staff used person-centered communication, the more likely residents were to react positively. Additionally, the more missed opportunities in a conversation, the more likely residents were to react negatively. Conversation illustrations elaborate on the quantitative findings, and implications for staff training are discussed. © The Author(s) 2016.

  10. Photoneutron Reaction Data for Nuclear Physics and Astrophysics

    NASA Astrophysics Data System (ADS)

    Utsunomiya, Hiroaki; Renstrøm, Therese; Tveten, Gry Merete; Gheorghe, Ioana; Filipescu, Dan Mihai; Belyshev, Sergey; Stopani, Konstantin; Wang, Hongwei; Fan, Gongtao; Lui, Yiu-Wing; Symochko, Dmytro; Goriely, Stephane; Larsen, Ann-Cecilie; Siem, Sunniva; Varlamov, Vladimir; Ishkhanov, Boris; Glodariu, Tudor; Krzysiek, Mateusz; Takenaka, Daiki; Ari-izumi, Takashi; Amano, Sho; Miyamoto, Shuji

    2018-05-01

    We discuss the role of photoneutron reaction data in nuclear physics and astrophysics in conjunction with the Coordinated Research Project of the International Atomic Energy Agency with the code F41032 (IAEA-CRP F41032).

  11. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  12. Incorporation of coupled nonequilibrium chemistry into a two-dimensional nozzle code (SEAGULL)

    NASA Technical Reports Server (NTRS)

    Ratliff, A. W.

    1979-01-01

    A two-dimensional multiple shock nozzle code (SEAGULL) was extended to include the effects of finite rate chemistry. The basic code that treats multiple shocks and contact surfaces was fully coupled with a generalized finite rate chemistry and vibrational energy exchange package. The modified code retains all of the original SEAGULL features plus the capability to treat chemical and vibrational nonequilibrium reactions. Any chemical and/or vibrational energy exchange mechanism can be handled as long as thermodynamic data and rate constants are available for all participating species.

  13. 94Mo(γ,n) and 90Zr(γ,n) cross-section measurements towards understanding the origin of p-nuclei

    NASA Astrophysics Data System (ADS)

    Meekins, E.; Banu, A.; Karwowski, H.; Silano, J.; Zimmerman, W.; Muller, J.; Rich, G.; Bhike, M.; Tornow, W.; McClesky, M.; Travaglio, C.

    2014-09-01

    The nucleosynthesis beyond iron of the rarest stable isotopes in the cosmos, the so-called p-nuclei, is one of the forefront topics in nuclear astrophysics. Recently, a stellar source was found that, for the first time, was able to produce both light and heavy p-nuclei almost at the same level as 56Fe, including the most debated 92,94Mo and 96,98Ru; it was also found that there is an important contribution from the p-process nucleosynthesis to the neutron magic nucleus 90Zr. We focus here on constraining the origin of p-nuclei through nuclear physics by studying two key astrophysical photoneutron reaction cross sections for 94Mo(γ,n) and 90Zr(γ,n). Their energy dependencies were measured using quasi-monochromatic photon beams from Duke University's High Intensity Gamma-ray Source facility at the respective neutron threshold energies up to 18 MeV. Preliminary results of these experimental cross sections will be presented along with their comparison to predictions by a statistical model based on the Hauser-Feshbach formalism implemented in codes like TALYS and SMARAGD.

  14. Amino acid fermentation at the origin of the genetic code.

    PubMed

    de Vladar, Harold P

    2012-02-10

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acids pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments
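
    The significance test mentioned at the end of the abstract is essentially a permutation test: recompute the energetic yield under many random codon-to-amino-acid reassignments and compare the observed yield with that null distribution. The toy sketch below shows only the randomization logic; the yields, usage weights and the size of the toy code are invented placeholders, not the Stickland ATP stoichiometries analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented placeholder data: per-amino-acid energy yields and codon usage weights.
amino_acid_yield = np.array([3.0, 2.0, 2.0, 1.0, 0.5, 0.5, 0.0, 0.0])
codon_usage = rng.random(amino_acid_yield.size)
codon_usage /= codon_usage.sum()

def total_yield(assignment):
    """Usage-weighted energy yield for a given codon -> amino-acid assignment."""
    return float(np.dot(codon_usage, amino_acid_yield[assignment]))

observed = total_yield(np.arange(amino_acid_yield.size))        # the "observed code"
null = np.array([total_yield(rng.permutation(amino_acid_yield.size))
                 for _ in range(10000)])
p_value = np.mean(null >= observed)
print(f"observed yield = {observed:.3f}, permutation p-value = {p_value:.3f}")
```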

  15. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions.Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advance Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  16. Modeling shock-driven reaction in low density PMDI foam

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Alexander, C. Scott; Reinhart, William; Peterson, David

    Shock experiments on low density polyurethane foams reveal evidence of reaction at low impact pressures. However, these reaction thresholds are not evident over the low pressures reported for historical Hugoniot data of highly distended polyurethane at densities below 0.1 g/cc. To fill this gap, impact data given in a companion paper for polymethylene diisocyanate (PMDI) foam with a density of 0.087 g/cc were acquired for model validation. An equation of state (EOS) was developed to predict the shock response of these highly distended materials over the full range of impact conditions representing compaction of the inert material, low-pressure decomposition, and compression of the reaction products. A tabular SESAME EOS of the reaction products was generated using the JCZS database in the TIGER equilibrium code. In particular, the Arrhenius Burn EOS, a two-state model which transitions from an unreacted to a reacted state using single step Arrhenius kinetics, as implemented in the shock physics code CTH, was modified to include a statistical distribution of states. Hence, a single EOS is presented that predicts the onset to reaction due to shock loading in PMDI-based polyurethane foams. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.

  17. Quantum dynamics of tunneling dominated reactions at low temperatures

    NASA Astrophysics Data System (ADS)

    Hazra, Jisha; Balakrishnan, N.

    2015-05-01

    We report a quantum dynamics study of the Li + HF → LiF + H reaction at low temperatures of interest to cooling and trapping experiments. Contributions from non-zero partial waves are analyzed and results show narrow resonances in the energy dependence of the cross section that survive partial wave summation. The computations are performed using the ABC code and a simple modification of the ABC code that enables separate energy cutoffs for the reactant and product rovibrational energy levels is found to dramatically reduce the basis set size and computational expense. Results obtained using two ab initio electronic potential energy surfaces for the LiHF system show strong sensitivity to the choice of the potential. In particular, small differences in the barrier heights of the two potential surfaces are found to dramatically influence the reaction cross sections at low energies. Comparison with recent measurements of the reaction cross section (Bobbenkamp et al 2011 J. Chem. Phys. 135 204306) shows similar energy dependence in the threshold regime and an overall good agreement with experimental data compared to previous theoretical results. Also, the usefulness of a recently introduced method for ultracold reactions, which employs the quantum close-coupling method at short range and multichannel quantum defect theory at long range, is demonstrated in accurately evaluating product state-resolved cross sections for the D + H2 and H + D2 reactions.
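
    The basis-reduction idea, separate energy cutoffs for the reactant and product rovibrational channels, can be illustrated with a trivial filter over hypothetical channel energies; this is only a sketch of the bookkeeping, not the ABC code's actual basis construction.

    ```python
    import numpy as np

    # Toy illustration of separate basis-set energy cutoffs for the reactant (Li + HF)
    # and product (LiF + H) arrangements.  The channel energies below are random
    # placeholders; the real ABC basis is built from diatomic rovibrational levels.
    rng = np.random.default_rng(0)
    reactant_channels = rng.uniform(0.0, 3.0, size=500)   # eV, hypothetical
    product_channels = rng.uniform(0.0, 3.0, size=500)    # eV, hypothetical

    def truncate(channels, emax):
        """Keep only channels whose internal energy lies below the cutoff."""
        return channels[channels <= emax]

    # one global cutoff versus separate reactant/product cutoffs
    global_size = truncate(reactant_channels, 2.5).size + truncate(product_channels, 2.5).size
    split_size = truncate(reactant_channels, 2.5).size + truncate(product_channels, 1.0).size
    print(global_size, split_size)   # the separate product cutoff keeps far fewer channels
    ```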

  18. Cross section of the 197Au(n,2n)196Au reaction

    NASA Astrophysics Data System (ADS)

    Kalamara, A.; Vlastou, R.; Kokkoris, M.; Diakaki, M.; Serris, M.; Patronis, N.; Axiotis, M.; Lagoyannis, A.

    2017-09-01

    The 197Au(n,2n)196Au reaction cross section has been measured at two energies, namely at 17.1 MeV and 20.9 MeV, by means of the activation technique, relative to the 27Al(n,α)24Na reference reaction cross section. Quasi-monoenergetic neutron beams were produced at the 5.5 MV Tandem T11/25 accelerator laboratory of NCSR "Demokritos", by means of the 3H(d,n)4He reaction, employing a new Ti-tritiated target of ˜ 400 GBq activity. The induced γ-ray activity of the targets and reference foils has been measured with HPGe detectors. The cross section for the population of the second isomeric (12-) state m2 of 196Au was independently determined. Auxiliary Monte Carlo simulations were performed using the MCNP code. The present results are in agreement with previous experimental data and with theoretical calculations of the measured reaction cross sections, which were carried out with the use of the EMPIRE code.
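
    For readers unfamiliar with the activation technique, the sketch below shows how a cross section is extracted relative to a reference reaction from measured γ-ray counts, using the standard constant-flux decay-timing factor. Every numerical value is a placeholder rather than data from this measurement, and the real analysis involves additional corrections.

    ```python
    import numpy as np

    def timing_factor(lam, t_irr, t_cool, t_meas):
        """Growth, cooling and counting factor for a constant-flux irradiation."""
        return (1 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool) * (1 - np.exp(-lam * t_meas)) / lam

    def sigma_relative(counts, counts_ref, sigma_ref, N, N_ref, eff, eff_ref,
                       Igamma, Igamma_ref, F, F_ref):
        """Cross section relative to a reference reaction irradiated in the same flux."""
        return sigma_ref * (counts / counts_ref) * (N_ref * eff_ref * Igamma_ref * F_ref) / (N * eff * Igamma * F)

    # 196Au and 24Na decay constants from their half-lives (6.17 d and 15.0 h)
    lam_au = np.log(2) / (6.17 * 86400.0)
    lam_na = np.log(2) / (15.0 * 3600.0)
    F_au = timing_factor(lam_au, 3 * 3600.0, 3600.0, 2 * 3600.0)
    F_na = timing_factor(lam_na, 3 * 3600.0, 3600.0, 2 * 3600.0)

    # Every number below (counts, reference cross section in barns, numbers of nuclei,
    # efficiencies, gamma intensities) is a placeholder, not data from this experiment.
    sigma = sigma_relative(1.2e4, 3.4e4, 0.12, 2.5e20, 3.0e20, 0.020, 0.025, 0.87, 1.0, F_au, F_na)
    print(f"sigma(197Au(n,2n)196Au) ~ {sigma * 1e3:.0f} mb  (placeholder inputs)")
    ```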

  19. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK

    PubMed Central

    2014-01-01

    Background Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block

  20. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.

    PubMed

    Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi

    2014-04-11

    Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming
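
    As a minimal sketch of the code-based route in a language other than Matlab, the following Python fragment integrates the same van der Pol system with a library ODE solver; it is an illustrative analogue, not the authors' Matlab or Simulink implementation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def van_der_pol(t, y, mu=1.0):
        """Classical van der Pol oscillator written as a first-order system."""
        x, v = y
        return [v, mu * (1.0 - x ** 2) * v - x]

    sol = solve_ivp(van_der_pol, (0.0, 20.0), [2.0, 0.0], dense_output=True, rtol=1e-8)
    t = np.linspace(0.0, 20.0, 200)
    x, v = sol.sol(t)
    print(x[:5])   # samples of the limit-cycle trajectory
    ```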

  1. KEWPIE2: A cascade code for the study of dynamical decay of excited nuclei

    NASA Astrophysics Data System (ADS)

    Lü, Hongliang; Marchix, Anthony; Abe, Yasuhisa; Boilley, David

    2016-03-01

    KEWPIE, a cascade code devoted to investigating the dynamical decay of excited nuclei, specially designed for treating very low probability events related to the synthesis of super-heavy nuclei formed in fusion-evaporation reactions, has been improved and rewritten in the C++ programming language to become KEWPIE2. The current version of the code comprises various nuclear models concerning light-particle emission, the fission process and the statistical properties of excited nuclei. General features of the code, such as the numerical scheme and the main physical ingredients, are described in detail. Typical calculations performed in the present paper clearly show that the theoretical predictions are generally in accordance with experimental data. Furthermore, since the values of some input parameters can be determined neither theoretically nor experimentally, a sensitivity analysis is presented. To this end, we systematically investigate the effects of using different parameter values and reaction models on the final results. As expected, in the case of heavy nuclei, the fission process plays the most crucial role in the theoretical predictions. This work would be essential for the numerical modeling of fusion-evaporation reactions.

  2. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamics (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  3. PLATYPUS: A code for reaction dynamics of weakly-bound nuclei at near-barrier energies within a classical dynamical model

    NASA Astrophysics Data System (ADS)

    Diaz-Torres, Alexis

    2011-04-01

    A self-contained Fortran-90 program based on a three-dimensional classical dynamical reaction model with stochastic breakup is presented, which is a useful tool for quantifying complete and incomplete fusion, and breakup in reactions induced by weakly-bound two-body projectiles near the Coulomb barrier. The code calculates (i) integrated complete and incomplete fusion cross sections and their angular momentum distribution, (ii) the excitation energy distribution of the primary incomplete-fusion products, (iii) the asymptotic angular distribution of the incomplete-fusion products and the surviving breakup fragments, and (iv) breakup observables, such as angle, kinetic energy and relative energy distributions. Program summary: Program title: PLATYPUS Catalogue identifier: AEIG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 342 No. of bytes in distributed program, including test data, etc.: 344 124 Distribution format: tar.gz Programming language: Fortran-90 Computer: Any Unix/Linux workstation or PC with a Fortran-90 compiler Operating system: Linux or Unix RAM: 10 MB Classification: 16.9, 17.7, 17.8, 17.11 Nature of problem: The program calculates a wide range of observables in reactions induced by weakly-bound two-body nuclei near the Coulomb barrier. These include integrated complete and incomplete fusion cross sections and their spin distribution, as well as breakup observables (e.g. the angle, kinetic energy, and relative energy distributions of the fragments). Solution method: All the observables are calculated using a three-dimensional classical dynamical model combined with the Monte Carlo sampling of probability-density distributions. See Refs. [1,2] for further details. Restrictions: The

  4. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reaction flows

    NASA Technical Reports Server (NTRS)

    Magnotti, F.; Diskin, G.; Matulaitis, J.; Chinitz, W.

    1984-01-01

    The use of silane (SiH4) as an effective ignitor and flame stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for prediction of its behavior at the conditions encountered in the combustor of a SCRAMJET engine was calculated. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization as a means for reduction of lengthy ignition delays and reaction times were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.

  5. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various forms of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance to understand how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators of these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles should be evaluated at each timestep [2]. This kind of problem is well adapted for General Purpose Graphic Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes, and to improve the calculation time. This code should be of importance to link radiation track structure simulations and DNA damage models.
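
    A highly simplified version of the pairwise, per-time-step reaction test mentioned above can be written with the classical Smoluchowski absorbing-boundary probability for a freely diffusing pair, rather than the more general Green functions of the cited works; all parameters below are placeholders.

    ```python
    import numpy as np
    from scipy.special import erfc

    def reaction_probability(r, R, D, dt):
        """Probability that a freely diffusing pair at separation r reacts within dt,
        for a perfectly absorbing encounter radius R (classical Smoluchowski result)."""
        return (R / r) * erfc((r - R) / np.sqrt(4.0 * D * dt))

    rng = np.random.default_rng(1)
    r = rng.uniform(0.6, 5.0, size=1000)      # pair separations [nm], hypothetical
    R, D, dt = 0.5, 1.0e-2, 1.0               # nm, nm^2/ps, ps (placeholder values)
    reacted = rng.random(r.size) < reaction_probability(r, R, D, dt)
    print(reacted.sum(), "of", r.size, "pairs react in this time step")
    ```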

  6. SkyNet: A Modular Nuclear Reaction Network Library

    NASA Astrophysics Data System (ADS)

    Lippuner, Jonas; Roberts, Luke F.

    2017-12-01

    Almost all of the elements heavier than hydrogen that are present in our solar system were produced by nuclear burning processes either in the early universe or at some point in the life cycle of stars. In all of these environments, there are dozens to thousands of nuclear species that interact with each other to produce successively heavier elements. In this paper, we present SkyNet, a new general-purpose nuclear reaction network that evolves the abundances of nuclear species under the influence of nuclear reactions. SkyNet can be used to compute the nucleosynthesis evolution in all astrophysical scenarios where nucleosynthesis occurs. SkyNet is free and open source, and aims to be easy to use and flexible. Any list of isotopes can be evolved, and SkyNet supports different types of nuclear reactions. SkyNet is modular so that new or existing physics, like nuclear reactions or equations of state, can easily be added or modified. Here, we present in detail the physics implemented in SkyNet with a focus on a self-consistent transition to and from nuclear statistical equilibrium to non-equilibrium nuclear burning, our implementation of electron screening, and coupling of the network to an equation of state. We also present comprehensive code tests and comparisons with existing nuclear reaction networks. We find that SkyNet agrees with published results and other codes to an accuracy of a few percent. Discrepancies, where they exist, can be traced to differences in the physics implementations.
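
    The kind of system such a network code evolves can be illustrated with a toy two-reaction network and arbitrary rates; the sketch below is not SkyNet's implementation or data, only the shape of the abundance ODEs.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy network: A -> B (decay, rate lam) and B + B -> C (two-body, rate k).
    # The abundances and rates are arbitrary illustrative numbers, not nuclear data.
    lam, k = 1.0, 5.0

    def dYdt(t, Y):
        A, B, C = Y
        return [-lam * A,
                lam * A - 2.0 * k * B * B,
                k * B * B]

    sol = solve_ivp(dYdt, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)
    Y_final = sol.y[:, -1]
    # A + B + 2C is conserved by construction, a cheap sanity check on the integration
    print(Y_final, "conserved sum =", Y_final @ np.array([1.0, 1.0, 2.0]))
    ```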

  7. Amino acid fermentation at the origin of the genetic code

    PubMed Central

    2012-01-01

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acid pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments.
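
    The randomisation test described above can be sketched as a simple permutation test: shuffle the codon-to-amino-acid assignment, recompute the energetic yield summed over complementary anticodon pairs, and compare with the observed assignment. The miniature code, the pair yields, and the statistic below are placeholders chosen only to show the mechanics, not the paper's actual assignments or ATP values.

    ```python
    import random
    from itertools import combinations

    COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}
    def revcomp(tri):
        return "".join(COMP[b] for b in reversed(tri))

    # A toy 6-codon "code" and hypothetical per-pair ATP yields for Stickland-like
    # amino acid pairs (placeholders, not the values used in the paper).
    code = {"AAC": "Asn", "GUU": "Val", "GCC": "Ala", "GGC": "Gly", "CCA": "Pro", "UGG": "Trp"}
    pair_yield = {frozenset(p): y for p, y in
                  [(("Asn", "Val"), 3.0), (("Ala", "Gly"), 2.0), (("Pro", "Trp"), 1.0)]}

    def energetic_yield(assignment):
        """Sum ATP yields over codon pairs whose sequences are complementary."""
        total = 0.0
        for c1, c2 in combinations(assignment, 2):
            if revcomp(c1) == c2:
                total += pair_yield.get(frozenset((assignment[c1], assignment[c2])), 0.0)
        return total

    observed = energetic_yield(code)
    amino_acids = list(code.values())
    null = []
    for _ in range(10000):
        random.shuffle(amino_acids)                   # randomise the assignment
        null.append(energetic_yield(dict(zip(code, amino_acids))))
    p = sum(y >= observed for y in null) / len(null)
    print(f"observed yield {observed}, randomised codes reach it with p = {p:.3f}")
    ```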

  8. Measurements of 67Ga production cross section induced by protons on natZn in the low energy range from 1.678 to 2.444 MeV

    NASA Astrophysics Data System (ADS)

    Wachter, J. A.; Miranda, P. A.; Morales, J. R.; Cancino, S. A.; Correa, R.

    2015-02-01

    The experimental production cross section for the reaction natZn(p,x)67Ga has been measured in the energy range from 1.678 to 2.444 MeV. The methodology used in this work is based on the characteristic X-rays emitted after irradiation by the daughter nuclei that decay by electron capture (EC), together with the use of a complementary PIXE experiment. By doing so, the expressions needed to determine cross section values are simplified, since experimental factors such as the geometric setup and the detector efficiency are avoided. 67Ga is a radionuclide particularly suited for this method since it decays by electron capture with a 100% branching ratio and the subsequent characteristic X-ray emission is easily detected. Natural zinc targets were fabricated by the PVD technique and their thicknesses were afterwards determined by Rutherford Backscattering Spectrometry. Cross section measurements were carried out using the Van de Graaff accelerator located at the Faculty of Sciences, University of Chile. It was found that our data for the natZn(p,x)67Ga reaction are, in general, in good agreement with existing experimental data and with values calculated with the ALICE/ASH nuclear code. On the other hand, the values predicted by Talys-1.6 are systematically lower than our measured data.

  9. Neutrino-induced reactions on nuclei

    NASA Astrophysics Data System (ADS)

    Gallmeister, K.; Mosel, U.; Weil, J.

    2016-09-01

    Background: Long-baseline experiments such as the planned deep underground neutrino experiment (DUNE) require theoretical descriptions of the complete event in a neutrino-nucleus reaction. Since nuclear targets are used this requires a good understanding of neutrino-nucleus interactions. Purpose: Develop a consistent theory and code framework for the description of lepton-nucleus interactions that can be used to describe not only inclusive cross sections, but also the complete final state of the reaction. Methods: The Giessen-Boltzmann-Uehling-Uhlenbeck (GiBUU) implementation of quantum-kinetic transport theory is used, with improvements in its treatment of the nuclear ground state and of 2p2h interactions. For the latter an empirical structure function from electron scattering data is used as a basis. Results: Results for electron-induced inclusive cross sections are given as a necessary check for the overall quality of this approach. The calculated neutrino-induced inclusive double-differential cross sections show good agreement with data from neutrino and antineutrino reactions for different neutrino flavors at MiniBooNE and T2K. Inclusive double-differential cross sections for MicroBooNE, NOvA, MINERvA, and LBNF/DUNE are given. Conclusions: Based on the GiBUU model of lepton-nucleus interactions a good theoretical description of inclusive electron-, neutrino-, and antineutrino-nucleus data over a wide range of energies, different neutrino flavors, and different experiments is now possible. Since no tuning is involved this theory and code should be reliable also for new energy regimes and target masses.

  10. ICC-CLASS: isotopically-coded cleavable crosslinking analysis software suite

    PubMed Central

    2010-01-01

    Background Successful application of crosslinking combined with mass spectrometry for studying proteins and protein complexes requires specifically-designed crosslinking reagents, experimental techniques, and data analysis software. Using isotopically-coded ("heavy and light") versions of the crosslinker and cleavable crosslinking reagents is analytically advantageous for mass spectrometric applications and provides a "handle" that can be used to distinguish crosslinked peptides of different types, and to increase the confidence of the identification of the crosslinks. Results Here, we describe a program suite designed for the analysis of mass spectrometric data obtained with isotopically-coded cleavable crosslinkers. The suite contains three programs called: DX, DXDX, and DXMSMS. DX searches the mass spectra for the presence of ion signal doublets resulting from the light and heavy isotopic forms of the isotopically-coded crosslinking reagent used. DXDX searches for possible mass matches between cleaved and uncleaved isotopically-coded crosslinks based on the established chemistry of the cleavage reaction for a given crosslinking reagent. DXMSMS assigns the crosslinks to the known protein sequences, based on the isotopically-coded and un-coded MS/MS fragmentation data of uncleaved and cleaved peptide crosslinks. Conclusion The combination of these three programs, which are tailored to the analytical features of the specific isotopically-coded cleavable crosslinking reagents used, represents a powerful software tool for automated high-accuracy peptide crosslink identification. See: http://www.creativemolecules.com/CM_Software.htm PMID:20109223
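
    The doublet search performed by DX can be illustrated with a minimal scan of a centroided peak list for a fixed light/heavy mass spacing; the peak list, tolerance, and mass difference below are hypothetical and do not reflect DX's actual parameters or file formats.

    ```python
    # Minimal doublet search over a centroided peak list: report peak pairs whose mass
    # difference matches the light/heavy label spacing.  The peak list, tolerance and
    # delta_mass are hypothetical values, not DX parameters or file formats.
    def find_doublets(peaks, delta_mass, tol_ppm=10.0):
        peaks = sorted(peaks)
        doublets = []
        for i, m1 in enumerate(peaks):
            tol = m1 * tol_ppm * 1e-6
            for m2 in peaks[i + 1:]:
                if m2 - m1 > delta_mass + tol:
                    break                      # peaks are sorted, no later match possible
                if abs((m2 - m1) - delta_mass) <= tol:
                    doublets.append((m1, m2))
        return doublets

    peaks = [800.4021, 804.4210, 812.4712, 1021.5532, 1029.6040, 1100.6012]
    print(find_doublets(peaks, delta_mass=8.0502))   # ~ eight H -> D substitutions
    ```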

  11. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations to the code are also discussed.

  12. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations to the code are also discussed.

  13. A MATLAB-based finite-element visualization of quantum reactive scattering. I. Collinear atom-diatom reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warehime, Mick; Alexander, Millard H., E-mail: mha@umd.edu

    We restate the application of the finite element method to collinear triatomic reactive scattering dynamics with a novel treatment of the scattering boundary conditions. The method provides directly the reactive scattering wave function and, subsequently, the probability current density field. Visualizing these quantities provides additional insight into the quantum dynamics of simple chemical reactions beyond simplistic one-dimensional models. Application is made here to a symmetric reaction (H+H2), a heavy-light-light reaction (F+H2), and a heavy-light-heavy reaction (F+HCl). To accompany this article, we have written a MATLAB code which is fast, simple enough to be accessible to a wide audience, as well as generally applicable to any problem that can be mapped onto a collinear atom-diatom reaction. The code and user's manual are available for download from http://www2.chem.umd.edu/groups/alexander/FEM.

  14. Elastic and inelastic scattering of neutrons from 56Fe

    NASA Astrophysics Data System (ADS)

    Ramirez, Anthony Paul; McEllistrem, M. T.; Liu, S. H.; Mukhopadhyay, S.; Peters, E. E.; Yates, S. W.; Vanhoy, J. R.; Harrison, T. D.; Rice, B. G.; Thompson, B. K.; Hicks, S. F.; Howard, T. J.; Jackson, D. T.; Lenzen, P. D.; Nguyen, T. D.; Pecha, R. L.

    2015-10-01

    The differential cross sections for elastic and inelastic scattered neutrons from 56Fe have been measured at the University of Kentucky Accelerator Laboratory (www.pa.uky.edu/accelerator) for incident neutron energies between 2.0 and 8.0 MeV and for the angular range 30° to 150°. Time-of-flight techniques and pulse-shape discrimination were employed for enhancing the neutron energy spectra and for reducing background. An overview of the experimental procedures and data analysis for the conversion of neutron yields to differential cross sections will be presented. These include the determination of the energy-dependent detection efficiencies, the normalization of the measured differential cross sections, and the attenuation and multiple scattering corrections. Our results will also be compared to evaluated cross section databases and reaction model calculations using the TALYS code. This work is supported by grants from the U.S. Department of Energy-Nuclear Energy Universities Program: NU-12-KY-UK-0201-05, and the Donald A. Cowan Physics Institute at the University of Dallas.

  15. CHEETAH: A fast thermochemical code for detonation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculate trace levels of chemical compounds for environmental analysis; kinetics capability: CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates. Initial application: carbon condensation; CHEETAH will incorporate partial reactions; CHEETAH will be based on computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets; CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  16. Integrating Geochemical Reactions with a Particle-Tracking Approach to Simulate Nitrogen Transport and Transformation in Aquifers

    NASA Astrophysics Data System (ADS)

    Cui, Z.; Welty, C.; Maxwell, R. M.

    2011-12-01

    Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator split into two steps: (1) the physical movement of the particles including the attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions including biodegradation are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1, the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain with variably saturated conditions. Potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details, verification results of the enhanced code with one-dimensional analytical solutions and other existing numerical models will be presented in
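
    The operator-split cycle described above (random-walk move, particle-to-grid mapping, Monod-type reaction solve, map back to particles) can be sketched in one dimension with a single solute and toy parameters; this is only a schematic of the approach, not the SLIM-FAST implementation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(0)

    # 1-D toy domain and transport parameters (all values are placeholders)
    L, ncell, dt = 100.0, 50, 1.0            # m, cells, days
    v, D = 0.5, 0.1                          # advection [m/day], dispersion [m^2/day]
    x = rng.uniform(0.0, 10.0, size=5000)    # particle positions
    mass = np.full(x.size, 1.0e-3)           # solute mass carried by each particle [kg]

    def monod(t, c, mu_max=0.8, K=0.5):
        """Monod-type degradation of a single species (toy kinetics)."""
        return -mu_max * c / (K + c)

    for step in range(10):
        # step 1: Lagrangian random walk (advection + dispersion)
        x = np.clip(x + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size), 0.0, L)
        # step 2a: map particle masses onto grid-cell concentrations
        binned, _ = np.histogram(x, bins=ncell, range=(0.0, L), weights=mass)
        conc = binned / (L / ncell)           # mass per metre of column
        # step 2b: react each cell with an ODE solver, then map back to the particles
        reacted = np.array([solve_ivp(monod, (0.0, dt), [c]).y[0, -1] for c in conc])
        scale = np.divide(reacted, conc, out=np.ones_like(conc), where=conc > 0)
        cell = np.minimum((x / (L / ncell)).astype(int), ncell - 1)
        mass = mass * scale[cell]

    print(f"total dissolved mass after {10 * dt:.0f} days: {mass.sum():.4f} kg")
    ```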

  17. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  18. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie M.; Mashnik, Stepan G.

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  19. Adding kinetics and hydrodynamics to the CHEETAH thermochemical code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.E., Howard, W.M., Souers, P.C.

    1997-01-15

    In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.

  20. Development of a Monte Carlo code for the data analysis of the 18F(p,α)15O reaction at astrophysical energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, A.; Cherubini, S.; Spitaleri, C.

    Novae are violent explosive astrophysical events occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called 'close systems' because the two components interact with each other: a process of mass exchange results in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of 'hot hydrogen burning' are then injected into the interstellar medium by the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation at energies at or below 511 keV. The gamma rays produced by electron-positron annihilation of the positrons emitted in the decay of 18F are the most abundant and the first observable once the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of studying the nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main destruction channels, and it was therefore studied at energies of astrophysical interest. The experiment performed at RIKEN, Japan, using a beam of 18F produced at CRIB, has as its objective the study of the 18F(p,α)15O reaction in order to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo

  1. Drug metabolism and hypersensitivity reactions to drugs.

    PubMed

    Agúndez, José A G; Mayorga, Cristobalina; García-Martin, Elena

    2015-08-01

    The aim of the present review was to discuss recent advances supporting a role of drug metabolism, and particularly of the generation of reactive metabolites, in hypersensitivity reactions to drugs. The development of novel mass-spectrometry procedures has allowed the identification of reactive metabolites from drugs known to be involved in hypersensitivity reactions, including amoxicillin and nonsteroidal antiinflammatory drugs such as aspirin, diclofenac or metamizole. Recent studies demonstrated that reactive metabolites may efficiently bind plasma proteins, thus suggesting that drug metabolites, rather than - or in addition to - parent drugs, may elicit an immune response. As drug metabolic profiles are often determined by variability in the genes coding for drug-metabolizing enzymes, it is conceivable that an altered drug metabolism may predispose to the generation of reactive drug metabolites and hence to hypersensitivity reactions. These findings support the potential for the use of pharmacogenomics tests in hypersensitivity (type B) adverse reactions, in addition to the well known utility of these tests in type A adverse reactions. Growing evidence supports a link between genetically determined drug metabolism, altered metabolic profiles, generation of highly reactive metabolites and haptenization. Additional research is required to develop robust biomarkers for drug-induced hypersensitivity reactions.

  2. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for a similar concatenated scheme that uses a convolutional code. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
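
    A quick arithmetic check of the quoted expansion ranges, assuming a typical RS(255,223) outer code and, for the conventional comparison, a rate-1/2 inner convolutional code (both assumptions, not parameters taken from this report):

    ```python
    # Rough bandwidth-expansion check.  With TCM the modulation absorbs the inner-code
    # redundancy, so only the outer RS code costs bandwidth; a conventional concatenated
    # scheme also pays for its convolutional inner code.
    n, k = 255, 223                        # assumed RS(255,223) outer code
    tcm_expansion = n / k - 1.0            # ~0.14 -> ~14% for TCM-based concatenation
    conv_rate = 1 / 2                      # assumed rate-1/2 inner convolutional code
    conventional = (n / k) / conv_rate - 1.0   # ~1.29 -> ~129% for RS + convolutional
    print(f"TCM + RS: {tcm_expansion:.0%},  RS + rate-1/2 convolutional: {conventional:.0%}")
    ```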

  3. Open-source Framework for Storing and Manipulation of Plasma Chemical Reaction Data

    NASA Astrophysics Data System (ADS)

    Jenkins, T. G.; Averkin, S. N.; Cary, J. R.; Kruger, S. E.

    2017-10-01

    We present a new open-source framework for storage and manipulation of plasma chemical reaction data that has emerged from our in-house project MUNCHKIN. This framework consists of Python scripts and C++ programs. It stores data in an SQL database for fast retrieval and manipulation. For example, it is possible to fit cross-section data to the most widely used analytical expressions, calculate reaction rates for Maxwellian distribution functions of colliding particles, and fit them to different analytical expressions. Another important feature of this framework is the ability to calculate transport properties based on the cross-section data and supplied distribution functions. In addition, this framework allows the export of chemical reaction descriptions in LaTeX format for ease of inclusion in scientific papers. With the help of this framework it is possible to generate corresponding VSim (Particle-In-Cell simulation code) and USim (unstructured multi-fluid code) input blocks with appropriate cross-sections.
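
    One of the operations described, computing reaction rates for Maxwellian distribution functions from stored cross sections, amounts to a straightforward numerical integral; the routine below is a generic sketch with an invented cross-section table, not part of the MUNCHKIN framework.

    ```python
    import numpy as np

    def maxwellian_rate(E, sigma, T, mu):
        """<sigma v> for relative energies E [J], cross sections sigma [m^2],
        temperature T [K] and reduced mass mu [kg] (Maxwell-Boltzmann average)."""
        kB = 1.380649e-23
        integrand = sigma * E * np.exp(-E / (kB * T))
        integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
        return np.sqrt(8.0 / (np.pi * mu)) * (kB * T) ** -1.5 * integral

    # Invented cross-section table (a smooth bump), purely for illustration
    eV = 1.602176634e-19
    E = np.linspace(0.1, 50.0, 500) * eV
    sigma = 1.0e-20 * np.exp(-np.log(E / (5.0 * eV)) ** 2)      # m^2, hypothetical
    print(maxwellian_rate(E, sigma, T=3.0e4, mu=9.109e-31), "m^3/s")
    ```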

  4. CHEETAH: A next generation thermochemical code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.; Souers, P.

    1994-11-01

    CHEETAH is an effort to bring the TIGER thermochemical code into the 1990s. A wide variety of improvements have been made in Version 1.0. We have improved the robustness and ease of use of TIGER. All of TIGER's solvers have been replaced by new algorithms. We find that CHEETAH solves a wider variety of problems with no user intervention (e.g. no guesses for the C-J state) than TIGER did. CHEETAH has been made simpler to use than TIGER; typical use of the code occurs with the new standard run command. CHEETAH will make the use of thermochemical codes more attractive to practical explosive formulators. We have also made an extensive effort to improve over the results of TIGER. CHEETAH's version of the BKW equation of state (BKWC) is able to accurately reproduce energies from cylinder tests; something that other BKW parameter sets have been unable to do. Calculations performed with BKWC execute very quickly; typical run times are under 10 seconds on a workstation. In the future we plan to improve the underlying science in CHEETAH. More accurate equations of state will be used in the gas and the condensed phase. A kinetics capability will be added to the code that will predict reaction zone thickness. Further ease of use features will eventually be added; an automatic formulator that adjusts concentrations to match desired properties is planned.

  5. Particle induced nuclear reaction calculations of Boron target nuclei

    NASA Astrophysics Data System (ADS)

    Tel, Eyyup; Sahan, Muhittin; Sarpün, Ismail Hakki; Kavun, Yusuf; Gök, Ali Armagan; Poyraz, Meltem

    2017-09-01

    Boron is a useful element in many areas such as health, industry and energy. In particular, boron neutron capture therapy (BNCT) is one of its medical applications: a boron target is irradiated with low-energy thermal neutrons and, at the end of the reactions, alpha particles are emitted together with recoiling lithium-7 nuclei. In this study, charged-particle-induced nuclear reaction calculations for boron target nuclei were investigated in the incident proton and alpha energy range of 5-50 MeV. The excitation functions for reactions on the 10B target nucleus have been calculated by using the PCROSS code. The semi-empirical calculations for (p,α) reactions have been done by using the cross section formula with the new coefficients obtained by Tel et al. The calculated results were compared with the experimental data from the literature.

  6. Ab initio Quantum Chemical and Experimental Reaction Kinetics Studies in the Combustion of Bipropellants

    DTIC Science & Technology

    2017-03-24

    Briefing charts, 24 March 2017 (reporting period 01 March 2017 - 31 March 2017): Ab initio Quantum Chemical and Experimental Reaction Kinetics Studies in the Combustion of Bipropellants. Air Force Research Laboratory, AFRL/RQRS, 1 Ara Road, Edwards AFB, CA 93524. Email: ghanshyam.vaghjiani@us.af.mil. Cited: Zador et al., Prog. Energ. Combust. Sci., 37, 371 (2011). DISTRIBUTION A: Approved for public release (Clearance 17161).

  7. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    DOE PAGES

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...

    2016-02-24

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.

  8. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    NASA Astrophysics Data System (ADS)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
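
    The rate-based species selection can be sketched as a core/edge promotion step driven by a flux tolerance; the species names, fluxes, and threshold below are toy values, and the function is not RMG's actual API.

    ```python
    # Schematic of a rate-based model-enlargement step: species start on the "edge"
    # and are promoted to the "core" only when their net formation flux exceeds a
    # tolerance times a characteristic core flux.  All values are toy placeholders.
    def enlarge(core, edge, flux, tol=0.1):
        """core/edge: sets of species; flux: dict species -> net formation rate."""
        char_flux = max(abs(flux.get(s, 0.0)) for s in core)
        promoted = {s for s in edge if abs(flux.get(s, 0.0)) > tol * char_flux}
        return core | promoted, edge - promoted

    core = {"CH4", "O2", "N2"}
    edge = {"CH3", "HO2", "CH3OO", "C2H6"}
    flux = {"CH4": -1.0, "O2": -2.0, "CH3": 0.9, "HO2": 0.4, "CH3OO": 0.05, "C2H6": 0.01}
    core, edge = enlarge(core, edge, flux)
    print(sorted(core))   # CH3 and HO2 pass the flux criterion and join the core
    ```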

  9. STANDARD BIG BANG NUCLEOSYNTHESIS UP TO CNO WITH AN IMPROVED EXTENDED NUCLEAR NETWORK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coc, Alain; Goriely, Stephane; Xu, Yi

    Primordial or big bang nucleosynthesis (BBN) is one of the three strong pieces of evidence for the big bang model, together with the expansion of the universe and the cosmic microwave background radiation. In this study, we improve the standard BBN calculations taking into account new nuclear physics analyses and enlarge the nuclear network up to sodium. This is, in particular, important to evaluate the primordial value of the CNO mass fraction that could affect Population III stellar evolution. For the first time we list the complete network of more than 400 reactions with references to the origin of the rates, including approximately 270 reaction rates calculated using the TALYS code. Together with the cosmological light elements, we calculate the primordial beryllium, boron, carbon, nitrogen, and oxygen nuclei. We performed a sensitivity study to identify the important reactions for CNO, 9Be, and boron nucleosynthesis. We re-evaluated those important reaction rates using experimental data and/or theoretical evaluations. The results are compared with previous calculations: the primordial beryllium abundance increases by a factor of four compared to its previous evaluation, but we note a stability for B/H and for the CNO/H abundance ratio, which remains close to its previous value of 0.7 x 10^-15. On the other hand, the extension of the nuclear network has not changed the 7Li value, so its abundance is still 3-4 times greater than its observed spectroscopic value.

  10. The NATA code; theory and analysis. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Bade, W. L.; Yos, J. M.

    1975-01-01

    The NATA code is a computer program for calculating quasi-one-dimensional gas flow in axisymmetric nozzles and rectangular channels, primarily to describe conditions in electric arc-heated wind tunnels. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. The shear and heat flux on the nozzle wall are calculated and boundary layer displacement effects on the inviscid flow are taken into account. The program contains compiled-in thermochemical, chemical kinetic and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It calculates stagnation conditions on axisymmetric or two-dimensional models and conditions on the flat surface of a blunt wedge. Included in the report are: definitions of the inputs and outputs; precoded data on gas models, reactions, thermodynamic and transport properties of species, and nozzle geometries; explanations of diagnostic outputs and code abort conditions; test problems; and a user's manual for an auxiliary program (NOZFIT) used to set up analytical curve fits to nozzle profiles.

  11. Rates for neutron-capture reactions on tungsten isotopes in iron meteorites. [Abstract only

    NASA Technical Reports Server (NTRS)

    Masarik, J.; Reedy, R. C.

    1994-01-01

    High-precision W isotopic analyses by Harper and Jacobsen indicate the W-182/W-183 ratio in the Toluca iron meteorite is shifted by -(3.0 +/- 0.9) x 10^-4 relative to a terrestrial standard. Possible causes of this shift are neutron-capture reactions on W during Toluca's approximately 600-Ma exposure to cosmic ray particles or radiogenic growth of W-182 from 9-Ma Hf-182 in the silicate portion of the Earth after removal of W to the Earth's core. Calculations for the rates of neutron-capture reactions on W isotopes were done to study the first possibility. The LAHET Code System (LCS), which consists of the Los Alamos High Energy Transport (LAHET) code and the Monte Carlo N-Particle (MCNP) transport code, was used to numerically simulate the irradiation of the Toluca iron meteorite by galactic-cosmic-ray (GCR) particles and to calculate the rates of W(n, gamma) reactions. Toluca was modeled as a 3.9-m-radius sphere with the composition of a typical IA iron meteorite. The incident GCR protons and their interactions were modeled with LAHET, which also handled the interactions of neutrons with energies above 20 MeV. The rates for the capture of neutrons by W-182, W-183, and W-186 were calculated using the detailed library of (n, gamma) cross sections in MCNP. For this study of the possible effect of W(n, gamma) reactions on W isotope systematics, we consider the peak rates. The calculated maximum change in the normalized W-182/W-183 ratio due to neutron-capture reactions cannot account for more than 25% of the mass 182 deficit observed in Toluca W.

  12. Characterization of plastic deformation and chemical reaction in titanium-polytetrafluoroethylene mixture

    NASA Astrophysics Data System (ADS)

    Davis, Jeffery Jon

    1998-09-01

    The subject of this dissertation is the deformation process of a single metal-polymer system (titanium-polytetrafluoroethylene) and how this process leads to initiation of chemical reaction. Several different kinds of experiments were performed to characterize the behavior of this material under shock and impact. These mechanical conditions induce a rapid plastic deformation of the sample. All of the samples tested had an initial porosity, which increased the plastic flow condition. It is currently believed that during the deformation process two important conditions occur: removal of the oxide layer from the metal and decomposition of the polymer. These conditions allow for rapid chemical reaction. The research from this dissertation has provided insight into the complex behavior of plastic deformation and chemical reactions in titanium-polytetrafluoroethylene (PTFE, Teflon). A hydrodynamic computational code was used to model the plastic flow for correlation with the results from the experiments. The results from this work are being used to develop an ignition and growth model for metal/polymer systems. Three sets of experiments were used to examine deformation of the 80% Ti and 20% Teflon materials: drop-weight, gas gun, and split-Hopkinson pressure bar. Recovery studies included post-shot analysis of the samples using x-ray diffraction. Lagrangian hydrocode DYNA2D modeling of the drop-weight tests was performed for comparison with experiments. One of the reactions known to occur is Ti + C → TiC (s), which is exothermic. However, the initial reactions are believed to occur between Ti and fluorine, producing TixFy gases. The thermochemical code CHEETAH was used to investigate the detonation products and concentrations possible during the Ti-Teflon reaction. CHEETAH shows that the Ti-fluorine reactions are thermodynamically favorable. This research represents the most comprehensive study to date of deformation-induced chemical reaction in metal/polymer systems.

  13. Self-complementary circular codes in coding theory.

    PubMed

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
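
    A short sketch of the graph construction used in this approach: each trinucleotide b1b2b3 contributes the arcs b1 → b2b3 and b1b2 → b3, and circularity corresponds to the associated graph being acyclic. The example code below is a small illustrative set, not the maximal code X identified in genes.

    ```python
    # Directed graph G(X) associated with a trinucleotide code X, following the
    # construction of Fimmel et al.: each trinucleotide b1b2b3 contributes the arcs
    # b1 -> b2b3 and b1b2 -> b3.  A code is circular exactly when G(X) is acyclic.
    def graph(code):
        edges = {}
        for t in code:
            edges.setdefault(t[0], set()).add(t[1:])
            edges.setdefault(t[:2], set()).add(t[2])
        return edges

    def has_cycle(edges):
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {}
        def visit(v):
            colour[v] = GREY
            for w in edges.get(v, ()):
                if colour.get(w, WHITE) == GREY or (colour.get(w, WHITE) == WHITE and visit(w)):
                    return True
            colour[v] = BLACK
            return False
        return any(colour.get(v, WHITE) == WHITE and visit(v) for v in edges)

    X = ["AAC", "AAT", "ACC", "ATC", "ATT", "GAA"]   # small example, not the paper's maximal X
    print("circular" if not has_cycle(graph(X)) else "not circular")
    ```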

  14. Isomeric ratio measurements for the radiative neutron capture 176Lu(n,γ) at DANCE

    NASA Astrophysics Data System (ADS)

    Denis-Petit, D.; Roig, O.; Méot, V.; Morillon, B.; Romain, P.; Jandel, M.; Kawano, T.; Vieira, D. J.; Bond, E. M.; Bredeweg, T. A.; Couture, A. J.; Haight, R. C.; Keksis, A. L.; Rundberg, R. S.; Ullmann, J. L.

    2017-09-01

    The isomeric ratios for the neutron capture reaction 176Lu(n,γ) to the Jπ = 5/2-, 761.7 keV, T1/2 = 32.8 ns and the Jπ = 15/2+, 1356.9 keV, T1/2 = 11.1 ns levels of 177Lu have been measured for the first time with the Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos National Laboratory. These measured isomeric ratios are compared with TALYS calculations.

  15. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  16. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
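
    To make the error-correction notion concrete, the sketch below is a generic nearest-codeword decoder on random binary codewords; the codebook and noise level are hypothetical, not the receptive field codes of the paper. It decodes a corrupted pattern to the closest codeword under Hamming distance, the kind of ideal-observer decoding against which redundancy is judged.

```python
import numpy as np

def nearest_codeword(received, codebook):
    """Ideal-observer decoding: return the codeword closest in Hamming distance."""
    dists = (codebook != received).sum(axis=1)
    return codebook[np.argmin(dists)]

rng = np.random.default_rng(0)
codebook = rng.integers(0, 2, size=(16, 20))          # 16 random binary codewords of length 20
sent = codebook[3].copy()
noisy = sent.copy()
noisy[rng.choice(20, size=2, replace=False)] ^= 1     # flip two bits
print("decoded correctly:", np.array_equal(nearest_codeword(noisy, codebook), sent))
```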

  17. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  18. Experimental studies related to the origin of the genetic code and the process of protein synthesis - A review

    NASA Technical Reports Server (NTRS)

    Lacey, J. C., Jr.; Mullins, D. W., Jr.

    1983-01-01

    A survey is presented of the literature on the experimental evidence for the genetic code assignments and the chemical reactions involved in the process of protein synthesis. In view of the enormous number of theoretical models that have been advanced to explain the origin of the genetic code, attention is confined to experimental studies. Since genetic coding has significance only within the context of protein synthesis, it is believed that the problem of the origin of the code must be dealt with in terms of the origin of the process of protein synthesis. It is contended that the answers must lie in the nature of the molecules, amino acids and nucleotides, the affinities they might have for one another, and the effect that those affinities must have on the chemical reactions that are related to primitive protein synthesis. The survey establishes that for the bulk of amino acids, there is a direct and significant correlation between the hydrophobicity rank of the amino acids and the hydrophobicity rank of their anticodonic dinucleotides.

  19. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  20. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    DOE PAGES

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; ...

    2014-09-03

    Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant-temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using a spin cutoff parameter with a much weaker excitation-energy dependence than predicted by the Fermi-gas model.
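
    For orientation, the two ingredients discussed above can be written down explicitly. The sketch below gives the Fermi-gas total level density and the spin-cutoff spin distribution; the mass number, level-density parameter, and the spin-cutoff parametrization are illustrative textbook choices, not the fitted values of the paper.

```python
import numpy as np

def rho_fermi_gas(U, a, sigma):
    """Fermi-gas total level density (1/MeV) at effective excitation energy U (MeV)."""
    return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0) * sigma * a**0.25 * U**1.25)

def spin_fraction(J, sigma2):
    """Fraction of levels with spin J for spin-cutoff parameter sigma^2."""
    return (2 * J + 1) / (2.0 * sigma2) * np.exp(-((J + 0.5) ** 2) / (2.0 * sigma2))

A, a, U = 58, 6.5, 8.0                                  # illustrative values (a in 1/MeV, U in MeV)
sigma2 = 0.0888 * np.sqrt(a * U) * A ** (2.0 / 3.0)     # one common Fermi-gas spin-cutoff parametrization
print(rho_fermi_gas(U, a, np.sqrt(sigma2)), spin_fraction(2.5, sigma2))
```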

  1. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes and continues to grow; in 2011, it added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  2. Measurements of neutron capture cross sections on 70Zn at 0.96 and 1.69 MeV

    NASA Astrophysics Data System (ADS)

    Punte, L. R. M.; Lalremruata, B.; Otuka, N.; Suryanarayana, S. V.; Iwamoto, Y.; Pachuau, Rebecca; Satheesh, B.; Thanga, H. H.; Danu, L. S.; Desai, V. V.; Hlondo, L. R.; Kailas, S.; Ganesan, S.; Nayak, B. K.; Saxena, A.

    2017-02-01

    The cross sections of the 70Zn(n,γ)71mZn (T1/2 = 3.96 ± 0.05 h) reaction have been measured relative to the 197Au(n,γ)198Au cross sections at 0.96 and 1.69 MeV using a 7Li(p,n)7Be neutron source and the activation technique. The cross section of this reaction has been measured for the first time in the MeV region. The new experimental cross sections have been compared with the theoretical predictions of TALYS-1.6 with various level-density models and γ-ray strength functions, as well as with the TENDL-2015 library. The TALYS-1.6 calculation with the generalized superfluid level-density model and the Kopecky-Uhl generalized Lorentzian γ-ray strength function reproduced the new experimental cross sections at both incident energies. The 70Zn(n,γ)71g+mZn total capture cross sections have also been derived by applying the evaluated isomeric ratios of the TENDL-2015 library to the measured partial capture cross sections. The spectrum-averaged total capture cross section derived in the present paper agrees well with the JENDL-4.0 library at 0.96 MeV, whereas at 1.69 MeV it lies between the TENDL-2015 and JENDL-4.0 libraries.
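
    The spectrum-averaging step mentioned above amounts to folding the excitation function with the neutron spectrum. A minimal sketch, with a hypothetical cross-section shape and a quasi-monoenergetic spectrum rather than the measured ones:

```python
import numpy as np

def spectrum_averaged_xs(E, sigma, phi):
    """<sigma> = integral(sigma * phi dE) / integral(phi dE) on a common energy grid."""
    return np.trapz(sigma * phi, E) / np.trapz(phi, E)

E = np.linspace(0.5, 1.5, 200)                     # MeV, hypothetical grid
sigma = 2.0e-3 / np.sqrt(E)                        # barn, illustrative capture-like shape
phi = np.exp(-0.5 * ((E - 0.96) / 0.05) ** 2)      # quasi-monoenergetic spectrum, arbitrary units
print(spectrum_averaged_xs(E, sigma, phi))
```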

  3. Information coding with frequency of oscillations in Belousov-Zhabotinsky encapsulated disks

    NASA Astrophysics Data System (ADS)

    Gorecki, J.; Gorecka, J. N.; Adamatzky, Andrew

    2014-04-01

    Information processing with an excitable chemical medium, like the Belousov-Zhabotinsky (BZ) reaction, is typically based on information coding in the presence or absence of excitation pulses. Here we present a new concept of Boolean coding that can be applied to an oscillatory medium. A medium represents the logical TRUE state if a selected region oscillates with a high frequency. If the frequency falls below a specified value, it represents the logical FALSE state. We consider a medium composed of disks encapsulating an oscillatory mixture of reagents, as related to our recent experiments with lipid-coated BZ droplets. We demonstrate that by using specific geometrical arrangements of disks containing the oscillatory medium one can perform logical operations on variables coded in oscillation frequency. Realizations of a chemical signal diode and of a single-bit memory with oscillatory disks are also discussed.

  4. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterpart and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  5. Computational methods for diffusion-influenced biochemical reactions.

    PubMed

    Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G

    2007-08-01

    We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, the predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of the reactants. Using numerical simulations of reversible binding of a pair of molecules, we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found at the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
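
    As a non-spatial point of reference for the methods compared above, the sketch below runs a Gillespie stochastic simulation of reversible binding A + B ⇌ C in a single well-mixed volume; rate constants and copy numbers are illustrative, and the spatial resolution that distinguishes Brownian dynamics from the reaction-diffusion master equation is deliberately omitted.

```python
import math
import random

def ssa_reversible_binding(A, B, C, k_on, k_off, t_end):
    """Gillespie simulation of A + B <-> C in one well-mixed compartment."""
    t = 0.0
    while t < t_end:
        a1 = k_on * A * B                              # binding propensity
        a2 = k_off * C                                 # unbinding propensity
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0     # exponential waiting time
        if random.random() * a0 < a1:
            A, B, C = A - 1, B - 1, C + 1
        else:
            A, B, C = A + 1, B + 1, C - 1
    return A, B, C

print(ssa_reversible_binding(A=50, B=50, C=0, k_on=1e-3, k_off=0.1, t_end=100.0))
```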

  6. Study of (n,2n) reaction on 191,193Ir isotopes and isomeric cross section ratios

    NASA Astrophysics Data System (ADS)

    Vlastou, R.; Kalamara, A.; Kokkoris, M.; Patronis, N.; Serris, M.; Georgoulakis, M.; Hassapoglou, S.; Kobothanasis, K.; Axiotis, M.; Lagoyannis, A.

    2017-09-01

    The cross sections of the 191Ir(n,2n)190Irg+m1 and 191Ir(n,2n)190Irm2 reactions have been measured at 17.1 and 20.9 MeV neutron energies at the 5.5 MV tandem T11/25 Accelerator Laboratory of NCSR "Demokritos", using the activation method. The neutron beams were produced by means of the 3H(d,n)4He reaction at a flux of the order of 2 × 10⁵ n/cm²s. The neutron flux was deduced using the 27Al(n,α) reaction, while the flux variation of the neutron beam was monitored with a BF3 detector. The 193Ir(n,2n)192Ir reaction cross section has also been determined, taking into account the contribution from the contaminant 191Ir(n,γ)192Ir reaction. The correction method is based on the existing ENDF data for the contaminant reaction, convoluted with the neutron spectra, which have been studied extensively by means of simulations using the NeusDesc and MCNP codes. Statistical model calculations using the code EMPIRE 3.2.2, taking pre-equilibrium emission into account, have been performed on the data measured in this work as well as on data reported in the literature.
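
    For context, the generic activation relation underlying such cross-section measurements (a textbook form; the actual analysis additionally folds in the monitor reaction and the simulated neutron spectra):

```latex
\sigma \;=\;
\frac{\lambda\,N_\gamma}
     {N_t\,\Phi\,\varepsilon\,I_\gamma\,
      \bigl(1-e^{-\lambda t_{\mathrm{irr}}}\bigr)\,
      e^{-\lambda t_{\mathrm{cool}}}\,
      \bigl(1-e^{-\lambda t_{\mathrm{meas}}}\bigr)}
```

    where N_γ is the net photopeak count, N_t the number of target nuclei, Φ the neutron flux, ε the detector efficiency, I_γ the γ-ray intensity, λ the decay constant of the product, and t_irr, t_cool, t_meas the irradiation, cooling, and counting times.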

  7. Translational resistivity/conductivity of coding sequences during exponential growth of Escherichia coli.

    PubMed

    Takai, Kazuyuki

    2017-01-21

    Codon adaptation index (CAI) has been widely used for prediction of expression of recombinant genes in Escherichia coli and other organisms. However, CAI has no mechanistic basis that rationalizes its application to estimation of translational efficiency. Here, I propose a model based on which we could consider how codon usage is related to the level of expression during exponential growth of bacteria. In this model, translation of a gene is considered as an analog of electric current, and an analog of electric resistance corresponding to each gene is considered. "Translational resistance" is dependent on the steady-state concentration and the sequence of the mRNA species, and "translational resistivity" is dependent only on the mRNA sequence. The latter is the sum of two parts: one is the resistivity for the elongation reaction (coding sequence resistivity), and the other comes from all of the other steps of the decoding reaction. This electric circuit model clearly shows that some conditions should be met for codon composition of a coding sequence to correlate well with its expression level. On the other hand, I calculated relative frequency of each of the 61 sense codon triplets translated during exponential growth of E. coli from a proteomic dataset covering over 2600 proteins. A tentative method for estimating relative coding sequence resistivity based on the data is presented. Copyright © 2016. Published by Elsevier Ltd.
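
    A toy version of the circuit analogy (all per-codon values and the default below are hypothetical placeholders, not quantities from the paper): the resistivity of a coding sequence is the sum of per-codon contributions, the resistance divides that by the mRNA level, and the translational "current" is its reciprocal.

```python
# Hypothetical per-codon resistivities in arbitrary units (illustration only).
codon_resistivity = {"AAA": 1.0, "AAG": 1.2, "CUG": 0.8, "CUA": 2.5, "GGC": 0.9}

def coding_sequence_resistivity(cds):
    """Sum of per-codon resistivities over a coding sequence (RNA alphabet)."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    return sum(codon_resistivity.get(c, 1.5) for c in codons)   # 1.5 = placeholder default

def translational_current(cds, mrna_level, other_resistivity=0.5):
    """Current analog: flux = 1 / R with R = (rho_cds + rho_other) / [mRNA]."""
    R = (coding_sequence_resistivity(cds) + other_resistivity) / mrna_level
    return 1.0 / R

print(translational_current("AAACUGGGCCUA", mrna_level=10.0))
```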

  8. Eye coding mechanisms in early human face event-related potentials.

    PubMed

    Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G

    2014-11-10

    In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.

  9. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as much as dmin = 65, giving them powerful error-correcting capabilities and very low BER error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.

  10. Volume I: fluidized-bed code documentation, for the period February 28, 1983-March 18, 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piperopoulou, H.; Finson, M.; Bloomfield, D.

    1983-03-01

    This documentation supersedes the previous documentation of the Fluidized-Bed Gasifier code. Volume I documents a simulation program of a Fluidized-Bed Gasifier (FBG), and Volume II documents a systems model of the FBG. The FBG simulation program is an updated version of the PSI/FLUBED code which is capable of modeling slugging beds and variable bed diameter. In its present form the code is set up to model a Westinghouse commercial scale gasifier. The fluidized bed gasifier model combines the classical bubbling bed description for the transport and mixing processes with PSI-generated models for coal chemistry. At the distributor plate, the bubble composition is that of the inlet gas and the initial bubble size is set by the details of the distributor plate. Bubbles grow by coalescence as they rise. The bubble composition and temperature change with height due to transport to and from the cloud as well as homogeneous reactions within the bubble. The cloud composition also varies with height due to cloud/bubble exchange, cloud/emulsion exchange, and heterogeneous coal char reactions. The emulsion phase is considered to be well mixed.

  11. An incomplete assembly with thresholding algorithm for systems of reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2003-07-01

    Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied.
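
    The dropping rule described above can be illustrated on a single dense row (an ILUT-style sketch; the actual code applies it during finite element assembly of sparse rows, which this illustration does not reproduce): keep at most p entries per row, and only those exceeding a drop tolerance relative to the row norm.

```python
import numpy as np

def threshold_row(row, p, droptol):
    """Keep at most p entries of the row, and only those with magnitude
    above droptol * ||row||_2 (incomplete-assembly / ILUT-style dropping)."""
    kept = np.zeros_like(row)
    cutoff = droptol * np.linalg.norm(row)
    candidates = sorted(((abs(v), j) for j, v in enumerate(row) if abs(v) > cutoff), reverse=True)
    for _, j in candidates[:p]:
        kept[j] = row[j]
    return kept

row = np.array([4.0, -0.9, 0.01, 0.3, -0.002, 1.1])
print(threshold_row(row, p=3, droptol=1e-2))        # keeps 4.0, 1.1 and -0.9
```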

  12. Pycellerator: an arrow-based reaction-like modelling language for biological simulations.

    PubMed

    Shapiro, Bruce E; Mjolsness, Eric

    2016-02-15

    We introduce Pycellerator, a Python library for reading Cellerator arrow notation from standard text files, conversion to differential equations, generating stand-alone Python solvers, and optionally running and plotting the solutions. All of the original Cellerator arrows, which represent reactions ranging from mass action, Michaelis-Menten-Henri (MMH) and Gene-Regulation (GRN) to Monod-Wyman-Changeux (MWC), user-defined reactions and enzymatic expansions (KMech), were previously represented with the Mathematica extended character set. These are now typed as reaction-like commands in ASCII text files that are read by Pycellerator, which includes a Python command line interface (CLI), a Python application programming interface (API) and an iPython notebook interface. Cellerator reaction arrows are now input in text files. The arrows are parsed by Pycellerator and translated into differential equations in Python, and Python code is automatically generated to solve the system. Time courses are produced by executing the auto-generated Python code. Users have full freedom to modify the solver and utilize the complete set of standard Python tools. The new libraries are completely independent of the old Cellerator software and do not require Mathematica. All software is available (GPL) from the github repository at https://github.com/biomathman/pycellerator/releases. Details, including installation instructions and a glossary of acronyms and terms, are given in the Supplementary information. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  14. Comparison with simulations to experimental data for photo-neutron reactions using SPring-8 Injector

    NASA Astrophysics Data System (ADS)

    Asano, Yoshihiro

    2017-09-01

    Simulations of photo-nuclear reactions using the Monte Carlo codes PHITS and FLUKA have been performed for comparison with the data measured at the SPring-8 injector with 250 MeV and 961 MeV electrons. Measured data of 206Bi production due to the photo-nuclear reaction 209Bi(γ,3n)206Bi and the high-energy neutron reaction 209Bi(n,4n)206Bi at the beam dumps have been compared with the simulations. Neutron leakage spectra outside the shield wall are also compared between experiments and simulations.

  15. Fatal anaphylaxis registries data support changes in the WHO anaphylaxis mortality coding rules.

    PubMed

    Tanno, Luciana Kase; Simons, F Estelle R; Annesi-Maesano, Isabella; Calderon, Moises A; Aymé, Ségolène; Demoly, Pascal

    2017-01-13

    Anaphylaxis is defined as a severe life-threatening generalized or systemic hypersensitivity reaction. The difficulty of coding anaphylaxis fatalities under the World Health Organization (WHO) International Classification of Diseases (ICD) system is recognized as an important reason for under-notification of anaphylaxis deaths. On current death certificates, a limited number of ICD codes are valid as underlying causes of death, and death certificates do not include the word anaphylaxis per se. In this review, we provide evidence supporting the need for changes in the WHO mortality coding rules and call for the addition of anaphylaxis as an underlying cause of death on international death certificates. This publication will be included in support of a formal request to the WHO for this change in the context of the 11th ICD revision.

  16. Rate-compatible punctured convolutional codes (RCPC codes) and their applications

    NASA Astrophysics Data System (ADS)

    Hagenauer, Joachim

    1988-04-01

    The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
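
    A minimal sketch of the periodic puncturing idea with rate-compatible tables (the patterns below are hypothetical, not Hagenauer's published tables): every coded bit kept by a higher-rate pattern is also kept by each lower-rate pattern, so only incremental bits need to be sent when more protection is requested.

```python
import numpy as np

# Hypothetical rate-compatible puncturing tables for a rate-1/2 mother code, period P = 4.
# A 1 keeps the coded bit, a 0 punctures it; lower-rate tables are supersets of higher-rate ones.
patterns = {
    "4/5": np.array([[1, 1, 1, 1],
                     [1, 0, 0, 0]]),
    "2/3": np.array([[1, 1, 1, 1],
                     [1, 0, 1, 0]]),
    "1/2": np.array([[1, 1, 1, 1],
                     [1, 1, 1, 1]]),
}

def puncture(coded_bits, pattern):
    """coded_bits: shape (2, L) output of a rate-1/2 encoder, L a multiple of the period."""
    P = pattern.shape[1]
    mask = np.tile(pattern, (1, coded_bits.shape[1] // P)).astype(bool)
    return coded_bits[mask]     # transmitted bits (row-major flattening; ordering is a design choice)

coded = np.random.default_rng(1).integers(0, 2, size=(2, 8))
for rate, pat in patterns.items():
    print(rate, puncture(coded, pat))
```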

  17. Chromium and titanium isotopes produced in photonuclear reactions of vanadium, revisited

    NASA Astrophysics Data System (ADS)

    Sakamoto, K.; Yoshida, M.; Kubota, Y.; Fukasawa, T.; Kunugise, A.; Hamajima, Y.; Shibata, S.; Fujiwara, I.

    1989-10-01

    Photonuclear production yields of 51Ti and 51,49,48Cr from 51V were redetermined for bremsstrahlung end-point energies (E0) of 30 to 1000 or 1050 MeV with the aid of radiochemical separation of Cr. The yield curves for 51Ti, 51Cr, 49Cr and 48Cr show clear evidence for two components in the production process: one for secondary-proton reactions at E0 < Qπ and the other for photopion reactions at E0 > Qπ, Qπ being the Q-values for (γ, π+) and (γ, π+xn) reactions. The contributions of the secondary reactions to the production of the Ti and Cr isotopes at E0 > Qπ were then estimated by fitting calculated secondary yields to the observed ones at E0 < Qπ, and found to be about 40%, 20%, 4% and 4% for 51Ti, 51Cr, 49Cr and 48Cr, respectively, at E0 = 400 to 1000 MeV. The calculation of the secondary yields was based on the excitation functions for 51V(n, p) and (p, x'n) calculated with the ALICE code and the reported photoneutron and photoproton spectra from 12C and some other complex nuclei. The present results for 49Cr are close to the reported ones, while the present 48Cr yields differ by a factor of about 50. For the 51Ti and 51Cr yields, there are some discrepancies between the present and reported values. The yields corrected for the secondaries, in units of μb/equivalent quantum, were unfolded into cross sections per photon, in units of μb, as a function of monochromatic photon energy with the LOUHI-82 code. The results for 51Ti and 49Cr disagree in both magnitude and shape with the theoretical predictions based on DWIA and PWIA. A Monte Carlo calculation based on the PICA code by Gabriel and Alsmiller does reproduce the gross features of the present results.

  18. Amino acid codes in mitochondria as possible clues to primitive codes

    NASA Technical Reports Server (NTRS)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.

  19. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  20. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
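
    The class of problems LSENS targets can be pictured with a generic stiff mass-action system integrated by an off-the-shelf BDF solver (a toy two-reaction mechanism, not LSENS itself or any of its test cases):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy isothermal mechanism:  A -> B (k1),  B + B -> C (k2), mass-action kinetics.
k1, k2 = 50.0, 1.0e3

def rhs(t, y):
    A, B, C = y
    r1 = k1 * A
    r2 = k2 * B * B
    return [-r1, r1 - 2.0 * r2, r2]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0], method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])    # final amounts of A, B, C
```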

  1. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  2. Nuclear Forensics and Radiochemistry: Reaction Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rundberg, Robert S.

    In the intense neutron flux of a nuclear explosion, the production of isotopes may occur through successive neutron-induced reactions. The pathway to these isotopes illustrates both the complexity of the problem and the need for high quality nuclear data. The growth and decay of radioactive isotopes can follow a similarly complex network. The Bateman equation will be described and modified to apply to the transmutation of isotopes in a high flux reactor. An alternative model of growth and decay, the GD code, that can be applied to fission products will also be described.
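
    For reference, the Bateman solution for a purely decaying linear chain N1 → N2 → ⋯ → Nn with decay constants λi and only species 1 present initially reads (the reactor and explosion cases discussed above add neutron-induced production and destruction terms to this form):

```latex
N_n(t) \;=\; N_1(0)\,\Bigl(\prod_{i=1}^{n-1}\lambda_i\Bigr)
\sum_{i=1}^{n}\frac{e^{-\lambda_i t}}
{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\lambda_j-\lambda_i\bigr)}
```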

  3. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
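
    For orientation (stated here from standard coding-theory results, with the report to be consulted for the precise theorems): if C1 and C2 are linear codes over GF(q) with minimum distances d1, d2 and A_{d1}, A_{d2} minimum-weight codewords, the minimum-weight codewords of the product are the rank-one tensors of minimum-weight codewords, giving

```latex
d\bigl(C_1\otimes C_2\bigr)=d_1 d_2,
\qquad
A_{d_1 d_2}\bigl(C_1\otimes C_2\bigr)=\frac{A_{d_1}(C_1)\,A_{d_2}(C_2)}{q-1}.
```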

  4. Chemical computing with reaction-diffusion processes.

    PubMed

    Gorecki, J; Gizynski, K; Guzowski, J; Gorecka, J N; Garstecki, P; Gruenert, G; Dittrich, P

    2015-07-28

    Chemical reactions are responsible for information processing in living organisms. It is believed that the basic features of biological computing activity are reflected by a reaction-diffusion medium. We illustrate the ideas of chemical information processing considering the Belousov-Zhabotinsky (BZ) reaction and its photosensitive variant. The computational universality of information processing is demonstrated. For different methods of information coding, constructions of the simplest signal-processing devices are described. The function performed by a particular device is determined by the geometrical structure of oscillatory (or of excitable) and non-excitable regions of the medium. In a living organism, the brain is created as a self-grown structure of interacting nonlinear elements and reaches its functionality as the result of learning. We discuss whether such a strategy can be adopted for the generation of chemical information processing devices. Recent studies have shown that lipid-covered droplets containing a solution of reagents of the BZ reaction can be transported by flowing oil. Therefore, structures of droplets can be spontaneously formed at specific non-equilibrium conditions, for example forced by flows in a microfluidic reactor. We describe how to introduce information to a droplet structure, track the information flow inside it and optimize medium evolution to achieve the maximum reliability. Applications of droplet structures for classification tasks are discussed. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  5. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  6. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.

  7. An Overview of the XGAM Code and Related Software for Gamma-ray Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, W.

    2014-11-13

    The XGAM spectrum-fitting code and associated software were developed specifically to analyze the complex gamma-ray spectra that can result from neutron-induced reactions. The XGAM code is designed to fit a spectrum over the entire available gamma-ray energy range as a single entity, in contrast to the more traditional piecewise approaches. This global-fit philosophy enforces background continuity as well as consistency between local and global behavior throughout the spectrum, and in a natural way. This report presents XGAM and the suite of programs built around it with an emphasis on how they fit into an overall analysis methodology for complex gamma-ray data. An application to the analysis of time-dependent delayed gamma-ray yields from 235U fission is shown in order to showcase the codes and how they interact.

  8. QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation

    NASA Astrophysics Data System (ADS)

    Samana, A. R.; Krmpotić, F.; Bertulani, C. A.

    2010-06-01

    A computer code for the quasiparticle random phase approximation - QRPA and the projected quasiparticle random phase approximation - PQRPA models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary: Title of program: QRAP (Quasiparticle RAndom Phase approximation). Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines. Operating systems: WINDOWS or UNIX. Program language used: Fortran-77. Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space. No. of lines in distributed program, including test data, etc.: ˜ 8000. No. of bytes in distributed program, including test data, etc.: ˜ 256 kB. Distribution format: tar.gz. Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the inverse beta reaction of odd-odd nuclei as a function of the transferred momentum. Typical running time: ≈ 5 min on a 3 GHz processor for Data set 1.

  9. A systematic review of validated methods for identifying hypersensitivity reactions other than anaphylaxis (fever, rash, and lymphadenopathy), using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of hypersensitivity reactions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the hypersensitivity reactions of health outcomes of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify hypersensitivity reactions and including validation estimates of the coding algorithms. We identified five studies that provided validated hypersensitivity-reaction algorithms. Algorithm positive predictive values (PPVs) for various definitions of hypersensitivity reactions ranged from 3% to 95%. PPVs were high (i.e. 90%-95%) when both exposures and diagnoses were very specific. PPV generally decreased when the definition of hypersensitivity was expanded, except in one study that used data mining methodology for algorithm development. The ability of coding algorithms to identify hypersensitivity reactions varied, with decreasing performance occurring with expanded outcome definitions. This examination of hypersensitivity-reaction coding algorithms provides an example of surveillance bias resulting from outcome definitions that include mild cases. Data mining may provide tools for algorithm development for hypersensitivity and other health outcomes. Research needs to be conducted on designing validation studies to test hypersensitivity-reaction algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Effective dose to immuno-PET patients due to metastable impurities in cyclotron produced zirconium-89

    NASA Astrophysics Data System (ADS)

    Alfuraih, Abdulrahman; Alzimami, Khalid; Ma, Andy K.; Alghamdi, Ali; Al Jammaz, Ibrahim

    2014-11-01

    Immuno-PET is a nuclear medicine technique that combines positron emission tomography (PET) with radio-labeled monoclonal antibodies (mAbs) for tumor characterization and therapy. Zirconium-89 (89Zr) is an emerging radionuclide for immuno-PET imaging. Its long half-life (78.4 h) gives ample time for the production, the administering and the patient uptake of the tagged radiopharmaceutical. Furthermore, the nuclides will remain in the tumor cells after the mAbs are catabolized, so that time-series studies are possible without further administration of radiopharmaceuticals. 89Zr can be produced in medical cyclotrons by bombarding an yttrium-89 (89Y) target with a proton beam through the 89Y(p,n)89Zr reaction. In this study, we estimated the effective dose to head and neck cancer patients undergoing 89Zr-based immuno-PET procedures. The production of 89Zr and the impurities from proton irradiation of the 89Y target in a cyclotron was calculated with the Monte Carlo code MCNPX and the nuclear reaction code TALYS. The cumulated activities of the Zr isotopes were derived from real patient data in the literature and the effective doses were estimated using the MIRD specific absorbed fraction formalism. The estimated effective dose from 89Zr is 0.5±0.2 mSv/MBq. The highest organ dose is 1.8±0.2 mSv/MBq in the liver. These values are in agreement with those reported in the literature. The effective dose from 89mZr is about 0.2-0.3% of the 89Zr dose in the worst case. Since the ratio of 89mZr to 89Zr depends on the cooling time as well as the irradiation details, contaminant dose estimation is an important aspect in optimizing the cyclotron irradiation geometry, energy and time.
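
    The MIRD bookkeeping referred to above combines cumulated activities with S values and tissue weighting factors. The sketch below is schematic only: the weighting factors follow the ICRP convention in spirit, but every numerical value is a placeholder, not data from this study.

```python
# Schematic MIRD-style effective-dose calculation.  All numbers are placeholders.
w_T = {"liver": 0.04, "kidneys": 0.009, "red_marrow": 0.12}            # tissue weighting factors (subset)
A_tilde = {"liver": 3.0e4, "kidneys": 1.0e4, "red_marrow": 2.0e4}      # cumulated activity, MBq*s per MBq administered
S = {  # absorbed dose per unit cumulated activity, mGy/(MBq*s), keyed by (source, target)
    ("liver", "liver"): 1.0e-5, ("liver", "kidneys"): 1.0e-6, ("liver", "red_marrow"): 8.0e-7,
    ("kidneys", "liver"): 1.0e-6, ("kidneys", "kidneys"): 2.0e-5, ("kidneys", "red_marrow"): 9.0e-7,
    ("red_marrow", "liver"): 8.0e-7, ("red_marrow", "kidneys"): 9.0e-7, ("red_marrow", "red_marrow"): 1.2e-5,
}

def organ_dose(target):
    """D(target) = sum over source organs of A_tilde(source) * S(source -> target)."""
    return sum(A_tilde[src] * S[(src, target)] for src in A_tilde)

effective_dose = sum(w * organ_dose(organ) for organ, w in w_T.items())   # mSv per MBq, toy numbers
print(effective_dose)
```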

  11. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  12. New quantum codes derived from a family of antiprimitive BCH codes

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication system and quantum information theory. In this paper, we study the construction of quantum codes from a family of q2-ary BCH codes with length n=q2m+1 (also called antiprimitive BCH codes in the literature), where q≥4 is a power of 2 and m≥2. By a detailed analysis of some useful properties about q2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q2-ary primitive BCH codes. Consequently, via Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.

  13. The use of the SRIM code for calculation of radiation damage induced by neutrons

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi

    2017-12-01

    Materials subjected to neutron irradiation undergo structural changes driven by the displacement cascades initiated by nuclear reactions. This study discusses a methodology to compute the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing the PKA information. This software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) and also the kinematics of the interactions. The deterministic method was used for verification of the results of (MCNPX + AMTRACK). The SRIM (formerly TRIM) code is capable of computing neutron radiation damage. The PKA information extracted by AMTRACK can be used as input to the SRIM code for systematic analysis of primary radiation damage. The radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is then calculated.
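
    For a sense of how PKA energies translate into damage, the sketch below applies the standard NRT displacement estimate to a list of PKA damage energies (a generic formula, not the AMTRACK or SRIM internals; the threshold energy and the example energies are illustrative).

```python
def nrt_displacements(T_dam_eV, E_d_eV=40.0):
    """NRT estimate of atomic displacements for a PKA with damage energy T_dam.
    E_d is the displacement threshold energy (material dependent; 40 eV is a
    commonly used value for iron-like metals)."""
    if T_dam_eV < E_d_eV:
        return 0.0
    if T_dam_eV < 2.5 * E_d_eV:
        return 1.0
    return 0.8 * T_dam_eV / (2.0 * E_d_eV)

pka_damage_energies = [1.2e3, 5.0e4, 3.3e2]       # eV, hypothetical PKA spectrum sample
print(sum(nrt_displacements(T) for T in pka_damage_energies))
```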

  14. Surface acoustic wave coding for orthogonal frequency coded devices

    NASA Technical Reports Server (NTRS)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  15. Students' Views and Attitudes Towards the Communication Code Used in Press Articles about Science

    ERIC Educational Resources Information Center

    Halkia, Krystallia; Mantzouridis, Dimitris

    2005-01-01

    The present research was designed to investigate the reaction of secondary school students to the communication code that the press uses in science articles: it attempts to trace which communication techniques can be of potential use in science education. The sample of the research consists of 351 secondary school students. The research instrument…

  16. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    ERIC Educational Resources Information Center

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  17. Fission time scale from pre-scission neutron and α multiplicities in the 16O + 194Pt reaction

    NASA Astrophysics Data System (ADS)

    Kapoor, K.; Verma, S.; Sharma, P.; Mahajan, R.; Kaur, N.; Kaur, G.; Behera, B. R.; Singh, K. P.; Kumar, A.; Singh, H.; Dubey, R.; Saneesh, N.; Jhingan, A.; Sugathan, P.; Mohanto, G.; Nayak, B. K.; Saxena, A.; Sharma, H. P.; Chamoli, S. K.; Mukul, I.; Singh, V.

    2017-11-01

    Pre- and post-scission α-particle multiplicities have been measured for the reaction 16O + 194Pt at 98.4 MeV, forming the 210Rn compound nucleus. α particles were measured at various angles in coincidence with the fission fragments. The moving source technique was used to extract the pre- and post-scission contributions to the particle multiplicity. Studies of the fission mechanism using different probes are helpful in understanding the detailed reaction dynamics. The neutron multiplicities for this reaction have been reported earlier. The multiplicities of neutrons and α particles were reproduced using the standard statistical model code JOANNE2 by varying the transient (τtr) and saddle-to-scission (τssc) times. This code includes deformation-dependent particle transmission coefficients, binding energies and level densities. Fission time scales of the order of 50-65 × 10⁻²¹ s are required to reproduce the neutron and α-particle multiplicities.

  18. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  19. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
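
    For reference, the bound in question, in Gallager's standard form for block length N and rate R (nats per channel use):

```latex
\overline{P}_e \;\le\; \exp\!\bigl[-N\,E_r(R)\bigr],
\qquad
E_r(R) \;=\; \max_{0\le\rho\le 1}\,\bigl[E_0(\rho)-\rho R\bigr],
```

    where E_0(ρ) is the Gallager function of the channel, maximized over input distributions; as the abstract explains, the exponent is tight for the average code, and matches the best-code exponent only above the critical rate.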

  20. Making your code citable with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feeder for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  1. 76 FR 77549 - Lummi Nation-Title 20-Code of Laws-Liquor Code

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Lummi Nation--Title 20--Code of Laws--Liquor... amendment to Lummi Nation's Title 20--Code of Laws--Liquor Code. The Code regulates and controls the... this amendment to Title 20--Lummi Nation Code of Laws--Liquor Code by Resolution 2011-038 on March 1...

  2. Nuclear model calculation and targetry recipe for production of 110mIn.

    PubMed

    Kakavand, T; Mirzaii, M; Eslami, M; Karimi, A

    2015-10-01

    (110m)In is potentially an important positron emitter that can be used in positron emission tomography. In this work, the excitation functions and production yields of the (110)Cd(d, 2n), (111)Cd(d, 3n), (nat)Cd(d, xn), (110)Cd(p, n), (111)Cd(p, 2n), (112)Cd(p, 3n) and (nat)Cd(p, xn) reactions to produce (110m)In were calculated using the nuclear model code TALYS and compared with the experimental data. The yield of the isomeric state of (110)In was also compared with that of the ground state to find the optimal projectile energy range for high-yield production of the metastable state. The results indicate that (110)Cd(p, n)(110m)In is a high-yield reaction with an isomeric ratio (σ(m)/σ(g)) of about 35 within the optimal incident energy range of 15-5 MeV. To make the target, cadmium was electroplated on a copper substrate under varying electroplating conditions such as pH, DC current density, temperature and time. A set of cold tests was also performed on the final sample under several thermal shocks to verify target resistance. The best electroplated cadmium target was irradiated with 15 MeV protons at a current of 100 µA for one hour, and the production yields of (110m)In and other byproducts were measured. Copyright © 2015 Elsevier Ltd. All rights reserved.
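    A minimal sketch of how an excitation function of this kind can be folded into a thick-target production estimate is given below. Every number in it (cross-section shape, stopping power, molar mass, half-life) is an illustrative placeholder, not TALYS output or evaluated data.

      import numpy as np

      # Hedged sketch: fold a sigma(E) excitation function into a thick-target activity estimate.
      e_charge = 1.602e-19                      # C
      N_A = 6.022e23                            # 1/mol
      M = 110.0                                 # g/mol, assumed target molar mass
      T_half = 69.0 * 60.0                      # s, assumed ~69 min half-life of the metastable state
      lam = np.log(2.0) / T_half

      E = np.linspace(5.0, 15.0, 21)                           # MeV, proton energies inside the target
      sigma_mb = 700.0 * np.exp(-((E - 10.0) / 3.0) ** 2)      # hypothetical sigma(E), mb
      S_mass = 30.0 + 0.5 * (15.0 - E)                         # hypothetical mass stopping power, MeV cm^2/g

      I_beam = 100e-6                           # A (100 uA, as in the abstract)
      t_irr = 3600.0                            # s (1 h irradiation)

      # reactions/s = (protons/s) * (N_A/M) * integral of sigma(E)/S(E) dE over the slowing-down range
      rate = (I_beam / e_charge) * (N_A / M) * np.trapz(sigma_mb * 1e-27 / S_mass, E)
      activity_Bq = rate * (1.0 - np.exp(-lam * t_irr))        # end-of-bombardment activity
      print(f"illustrative EOB activity: {activity_Bq:.3e} Bq")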

  3. Evaluation of proton cross-sections for radiation sources in the proton accelerator

    NASA Astrophysics Data System (ADS)

    Cho, Young-Sik; Lee, Cheol-Woo; Lee, Young-Ouk

    2007-08-01

    The Proton Engineering Frontier Project (PEFP) is currently building a proton accelerator in Korea which consists of a proton linear accelerator with 100 MeV of energy, 20 mA of current and various particle beam facilities. The final goal of this project is the production of 1 GeV proton beams, which will be used for various medical and industrial applications as well as for research in basic and applied sciences. Carbon and copper in the proton accelerator for PEFP become, through activation, radionuclides such as 7Be and 64Cu. Copper is a major element of the accelerator components, and carbon is planned to be used as a target material for the beam dump. A recent survey showed that the currently available cross-sections differ considerably from the experimental data for the production of some residual nuclides by proton-induced reactions on carbon and copper. To estimate more accurately the production of radioactive nuclides in the accelerator, proton cross-sections for carbon and copper were evaluated. The TALYS code was used for the evaluation of the cross-sections for the proton-induced reactions. To obtain the cross-sections that best fit the experimental data, optical model parameters for the neutron, proton and other complex particles such as the deuteron and alpha were successively adjusted. The evaluated cross-sections in this study are compared with the measurements and other evaluations.

  4. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.

  5. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2 or H atoms. Because the product of interest is particulate silicon, processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.

  6. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.

  7. Aerodynamic Interference Due to MSL Reaction Control System

    NASA Technical Reports Server (NTRS)

    Dyakonov, Artem A.; Schoenenberger, Mark; Scallion, William I.; VanNorman, John W.; Novak, Luke A.; Tang, Chun Y.

    2009-01-01

    An investigation of the effectiveness of the reaction control system (RCS) of the Mars Science Laboratory (MSL) entry capsule during atmospheric flight has been conducted. The reason for the investigation is that MSL is designed to fly a lifting, actively guided entry with hypersonic bank maneuvers, therefore an understanding of RCS effectiveness is required. In the course of the study several jet configurations were evaluated using the Langley Aerothermal Upwind Relaxation Algorithm (LAURA) code, the Data Parallel Line Relaxation (DPLR) code, the Fully Unstructured 3D (FUN3D) code and the Overset Grid Flowsolver (OVERFLOW) code. Computations indicated that some of the proposed configurations might induce aero-RCS interactions sufficient to impede and even overwhelm the intended control torques. It was found that the maximum potential for aero-RCS interference exists around peak dynamic pressure along the trajectory. The present analysis largely relies on computational methods. Ground testing, flight data and computational analyses are required to fully understand the problem. At the time of this writing, some experimental work spanning the Mach number range 2.5 through 4.5 has been completed and used to establish preliminary levels of confidence for the computations. As a result of the present work, a final RCS configuration has been designed so as to minimize aero-interference effects, and it is the design baseline for the MSL entry capsule.

  8. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The structure of the encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, through some examples for ARA codes, we show that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, based on a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.

  9. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  10. Analysis of Effectiveness of Phoenix Entry Reaction Control System

    NASA Technical Reports Server (NTRS)

    Dyakonov, Artem A.; Glass, Christopher E.; Desai, Prasun, N.; VanNorman, John W.

    2008-01-01

    Interaction between the external flowfield and the reaction control system (RCS) thruster plumes of the Phoenix capsule during entry has been investigated. The analysis covered rarefied, transitional, hypersonic and supersonic flight regimes. Performance of pitch, yaw and roll control authority channels was evaluated, with specific emphasis on the yaw channel due to its low nominal yaw control authority. Because Phoenix had already been constructed and its RCS could not be modified before flight, an assessment of RCS efficacy along the trajectory was needed to determine possible issues and to make necessary software changes. Effectiveness of the system in the various regimes was evaluated using a hybrid DSMC-CFD technique, based on the DSMC Analysis Code (DAC) and the General Aerodynamic Simulation Program (GASP), the LAURA (Langley Aerothermal Upwind Relaxation Algorithm) code, and the FUN3D (Fully Unstructured 3D) code. Results of the analysis at hypersonic and supersonic conditions suggest significant aero-RCS interference, which reduced the efficacy of the thrusters and could likely produce control reversal. Very little aero-RCS interference was predicted in rarefied and transitional regimes. A recommendation was made to the project to widen controller system deadbands to minimize (if not eliminate) the use of RCS thrusters through hypersonic and supersonic flight regimes, where their performance would be uncertain.

  11. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species with Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.

  12. Quartz crystal microbalance detection of DNA single-base mutation based on monobase-coded cadmium tellurium nanoprobe.

    PubMed

    Zhang, Yuqin; Lin, Fanbo; Zhang, Youyu; Li, Haitao; Zeng, Yue; Tang, Hao; Yao, Shouzhuo

    2011-01-01

    A new method for the detection of point mutation in DNA based on monobase-coded cadmium tellurium nanoprobes and the quartz crystal microbalance (QCM) technique is reported. A QCM sensor for single-base mutations in a DNA strand (adenine, thymine, cytosine or guanine, i.e., A, T, C or G) was fabricated by immobilizing single-base mutation DNA-modified magnetic beads onto the electrode surface with an external magnetic field near the electrode. The DNA-modified magnetic beads were obtained from the biotin-avidin affinity reaction of biotinylated DNA and streptavidin-functionalized core/shell Fe(3)O(4)/Au magnetic nanoparticles, followed by a DNA hybridization reaction. Single-base coded CdTe nanoprobes (A-CdTe, T-CdTe, C-CdTe and G-CdTe, respectively) were used as the detection probes. The mutation site in DNA was distinguished by detecting the decrease of the resonance frequency of the piezoelectric quartz crystal when the coded nanoprobe was added to the test system. This proposed detection strategy for point mutation in DNA proved to be sensitive, simple, repeatable and low-cost; consequently, it has great potential for single nucleotide polymorphism (SNP) detection. 2011 © The Japan Society for Analytical Chemistry
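    As background not stated in the abstract, QCM frequency shifts are commonly related to added mass through the Sauerbrey relation (rigid, thin-film assumption):

      \Delta f \;=\; -\,\frac{2 f_0^{2}}{A\sqrt{\rho_q\,\mu_q}}\,\Delta m,

    where f_0 is the fundamental resonance frequency, A the active electrode area, rho_q the density of quartz and mu_q its shear modulus; mass added by probe binding therefore appears as a frequency decrease.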

  13. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    USGS Publications Warehouse

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions at high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A database of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and is extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
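    A minimal sketch of the first capability listed above (a mineral-saturation index) is given below; the activity coefficients and water activity are taken as given inputs here, whereas PHRQPITZ would compute them from the Pitzer formalism, and the composition and log K value are illustrative assumptions.

      import math

      # Hedged sketch: saturation index SI = log10(IAP / Ksp) for gypsum, CaSO4·2H2O.
      def saturation_index(m_ca, gamma_ca, m_so4, gamma_so4, a_h2o, log_ksp=-4.58):
          # Ion activity product {Ca2+}{SO4 2-}{H2O}^2; activities = molality * activity coefficient
          iap = (m_ca * gamma_ca) * (m_so4 * gamma_so4) * a_h2o ** 2
          return math.log10(iap) - log_ksp

      # Hypothetical brine composition (molalities) and assumed activity coefficients
      print(saturation_index(m_ca=0.02, gamma_ca=0.30, m_so4=0.03, gamma_so4=0.25, a_h2o=0.95))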

  14. Lagrangian simulation of mixing and reactions in complex geochemical systems

    NASA Astrophysics Data System (ADS)

    Engdahl, Nicholas B.; Benson, David A.; Bolster, Diogo

    2017-04-01

    Simulations of detailed geochemical systems have traditionally been restricted to Eulerian reactive transport algorithms. This note introduces a Lagrangian method for modeling multicomponent reaction systems. The approach uses standard random walk-based methods for the particle motion steps but allows the particles to interact with each other by exchanging mass of their various chemical species. The colocation density of each particle pair is used to calculate the mass transfer rate, which creates a local disequilibrium that is then relaxed back toward equilibrium using the reaction engine PhreeqcRM. The mass exchange is the only step where the particles interact and the remaining transport and reaction steps are entirely independent for each particle. Several validation examples are presented, which reproduce well-known analytical solutions. These are followed by two demonstration examples of a competitive decay chain and an acid-mine drainage system. The source code, entitled Complex Reaction on Particles (CRP), and files needed to run these examples are hosted openly on GitHub (https://github.com/nbengdahl/CRP), so as to enable interested readers to readily apply this approach with minimal modifications.
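    A schematic sketch of the general particle-mixing idea (not the authors' algorithm, which relaxes the chemistry with PhreeqcRM) is shown below; the kernel scaling and all parameters are illustrative assumptions.

      import numpy as np

      # Hedged sketch: random-walk particles exchange solute mass pairwise using a Gaussian
      # colocation weight; equal-and-opposite updates conserve total mass while smoothing the profile.
      rng = np.random.default_rng(0)
      N, D, dt, steps = 100, 1e-3, 1.0, 20      # particles, diffusion coefficient, time step, steps
      x = rng.uniform(0.0, 1.0, N)              # particle positions on a 1D domain
      mass = np.where(x < 0.5, 1.0, 0.0)        # initial step profile of a single species

      print("initial total mass:", mass.sum())
      for _ in range(steps):
          # 1) transport: standard random-walk (diffusion) step
          x += np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
          # 2) mixing: symmetric pairwise exchange weighted by a colocation density
          var = 4.0 * D * dt                    # assumed encounter variance
          for i in range(N):
              for j in range(i + 1, N):
                  w = np.exp(-(x[i] - x[j]) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
                  p = min(1.0, w * dt)          # encounter probability (illustrative scaling)
                  dm = 0.5 * p * (mass[j] - mass[i])
                  mass[i] += dm
                  mass[j] -= dm
      print("final total mass:  ", mass.sum())  # unchanged by the exchange step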

  15. Mechanical code comparator

    DOEpatents

    Peter, Frank J.; Dalton, Larry J.; Plummer, David W.

    2002-01-01

    A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.

  16. The EDIT-COMGEOM Code

    DTIC Science & Technology

    1975-09-01

    This report assumes familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code into the target description data that can be used in the GIFT computer code.

  17. Increasing insect reactions in Alaska: is this related to changing climate?

    PubMed

    Demain, Jeffrey G; Gessner, Bradford D; McLaughlin, Joseph B; Sikes, Derek S; Foote, J Timothy

    2009-01-01

    In 2006, Fairbanks, AK, reported its first cases of fatal anaphylaxis as a result of Hymenoptera stings, concurrent with an increase in insect reactions observed throughout the state. This study was designed to determine whether Alaska medical visits for insect reactions have increased. We conducted a retrospective review of three independent patient databases in Alaska to identify trends of patients seeking medical care for adverse reactions after insect-related events. For each database, an insect reaction was defined as a claim for the International Classification of Diseases, Ninth Edition (ICD-9), codes E905.3, E906.4, and 989.5. Increases in insect reactions in each region were compared with temperature changes in the same region. Each database revealed a statistically significant trend in patients seeking care for insect reactions. Fairbanks Memorial Hospital Emergency Department reported a fourfold increase in patients in 2006 compared with previous years (1992-2005). The Allergy, Asthma, and Immunology Center of Alaska reported a threefold increase in patients from 1999-2002 to 2003-2007. A retrospective review of the Alaska Medicaid database from 1999 to 2006 showed increases in medical claims for insect reactions among all regions, with the largest percentage increases occurring in the most northern areas. Increases in insect reactions in Alaska have occurred after increases in annual and winter temperatures, and these findings may be causally related.

  18. Surface code implementation of block code state distillation.

    PubMed

    Fowler, Austin G; Devitt, Simon J; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.

  19. Surface code implementation of block code state distillation

    PubMed Central

    Fowler, Austin G.; Devitt, Simon J.; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868

  20. Improved neutron activation prediction code system development

    NASA Technical Reports Server (NTRS)

    Saqui, R. M.

    1971-01-01

    Two integrated neutron activation prediction code systems have been developed by modifying and integrating existing computer programs to perform the necessary computations to determine neutron induced activation gamma ray doses and dose rates in complex geometries. Each of the two systems is comprised of three computational modules. The first program module computes the spatial and energy distribution of the neutron flux from an input source and prepares input data for the second program which performs the reaction rate, decay chain and activation gamma source calculations. A third module then accepts input prepared by the second program to compute the cumulative gamma doses and/or dose rates at specified detector locations in complex, three-dimensional geometries.
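    The reaction-rate and decay step at the core of such a system can be illustrated with a single-nuclide activation formula; the sketch below is a simplification under assumed values and ignores the decay chains, geometry and gamma transport that the actual code system handles.

      import math

      # Hedged single-nuclide sketch: activity from a constant neutron flux,
      # A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_cool).
      def activation_activity(n_atoms, sigma_cm2, flux, t_irr, t_cool, half_life):
          lam = math.log(2.0) / half_life
          saturation = 1.0 - math.exp(-lam * t_irr)      # build-up during irradiation
          return n_atoms * sigma_cm2 * flux * saturation * math.exp(-lam * t_cool)

      # Example with assumed values: ~1 g of a cobalt-59-like target, 37 b capture cross section,
      # 1e10 n/cm^2/s flux, 24 h irradiation, 1 h cooling, 5.27 y product half-life.
      N = 6.022e23 / 59.0
      print(activation_activity(N, 37e-24, 1e10, 24 * 3600, 3600, 5.27 * 3.156e7), "Bq")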

  1. Comparing Different Strategies in Directed Evolution of Enzyme Stereoselectivity: Single- versus Double-Code Saturation Mutagenesis.

    PubMed

    Sun, Zhoutong; Lonsdale, Richard; Li, Guangyue; Reetz, Manfred T

    2016-10-04

    Saturation mutagenesis at sites lining the binding pockets of enzymes constitutes a viable protein engineering technique for enhancing or inverting stereoselectivity. Statistical analysis shows that oversampling in the screening step (the bottleneck) increases astronomically as the number of residues in the randomization site increases, which is the reason why reduced amino acid alphabets have been employed, in addition to splitting large sites into smaller ones. Limonene epoxide hydrolase (LEH) has previously served as the experimental platform in these methodological efforts, enabling comparisons between single-code saturation mutagenesis (SCSM) and triple-code saturation mutagenesis (TCSM); these employ either only one or three amino acids, respectively, as building blocks. In this study the comparative platform is extended by exploring the efficacy of double-code saturation mutagenesis (DCSM), in which the reduced amino acid alphabet consists of two members, chosen according to the principles of rational design on the basis of structural information. The hydrolytic desymmetrization of cyclohexene oxide is used as the model reaction, with formation of either (R,R)- or (S,S)-cyclohexane-1,2-diol. DCSM proves to be clearly superior to the likewise tested SCSM, affording both R,R- and S,S-selective mutants. These variants are also good catalysts in reactions of further substrates. Docking computations reveal the basis of enantioselectivity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    ERIC Educational Resources Information Center

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  3. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved to a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.

  4. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  5. Industrial Code Development

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1991-01-01

    The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user friendly interaction, contact sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first three codes to be completed and which are presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.

  6. New optimal asymmetric quantum codes constructed from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Lü, Liangdong

    2017-02-01

    In this paper, we propose the construction of asymmetric quantum codes from two families of constacyclic codes over the finite field F_{q^2} with code length n, where for the first family, q is an odd prime power of the form 4t + 1 (t ≥ 1 an integer) or 4t - 1 (t ≥ 2 an integer) and n1 = (q^2 + 1)/2; for the second family, q is an odd prime power of the form 10t + 3 or 10t + 7 (t ≥ 0 an integer) and n2 = (q^2 + 1)/5. As a result, families of new asymmetric quantum codes [[n, k, d_z/d_x]]…

  7. Quantum error-correcting codes from algebraic geometry codes of Castle type

    NASA Astrophysics Data System (ADS)

    Munuera, Carlos; Tenório, Wanderson; Torres, Fernando

    2016-10-01

    We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.

  8. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, P.

    1986-01-01

    Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.

  9. A World of Ideas: International Survey Gives a Voice to Teachers Everywhere

    ERIC Educational Resources Information Center

    Crow, Tracy

    2013-01-01

    Kristen Weatherby is a senior policy analyst at OECD in the education directorate. She runs the Teaching and Learning International Survey (TALIS) and is author or co-author of publications and blog posts on TALIS and teachers. She started her career as a classroom teacher in the United States before working in education in the private sector in…

  10. Assessment and Requirements of Nuclear Reaction Databases for GCR Transport in the Atmosphere and Structures

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.

    1998-01-01

    The transport properties of galactic cosmic rays (GCR) in the atmosphere, material structures, and human body (self-shielding) are of interest in risk assessment for supersonic and subsonic aircraft and for space travel in low-Earth orbit and on interplanetary missions. Nuclear reactions, such as knockout and fragmentation, present large modifications of particle type and energies of the galactic cosmic rays in penetrating materials. We make an assessment of the current nuclear reaction models and improvements in these models for developing the required transport code databases. A new fragmentation database (QMSFRG) based on microscopic models is compared to the NUCFRG2 model and implications for shield assessment are made using the HZETRN radiation transport code. For deep penetration problems, the build-up of light particles, such as nucleons, light clusters and mesons from nuclear reactions in conjunction with the absorption of the heavy ions, leads to the dominance of the charge Z = 0, 1, and 2 hadrons in the exposures at large penetration depths. Light particles are produced through nuclear or cluster knockout and in evaporation events with characteristically distinct spectra which play unique roles in the build-up of secondary radiations in shielding. We describe models of light particle production in nucleon and heavy ion induced reactions and make an assessment of the importance of light particle multiplicity and spectral parameters in these exposures.

  11. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.

  12. Phase synchronization motion and neural coding in dynamic transmission of neural information.

    PubMed

    Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting

    2011-07-01

    In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a model of neural network consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the model established, the neural phase synchronization motion and neural coding under spontaneous activity and stimulation are examined, for the case of varying network structure. Our analysis shows that, under the condition of spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participated in neural firing within the neuronal populations. The result of numerical simulation supports the existence of sparse coding within the brain, and verifies the crucial importance of the magnitudes of the coupling coefficients in neural information processing as well as the completely different information processing capability of neural information transmission in both serial and parallel couplings. The result also testifies that under external stimulation, the bigger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in other neuronal populations. We verify numerically the experimental result in neurobiology that the reduction of the coupling coefficient between neuronal populations implies the enhancement of lateral inhibition function in neural networks, with the enhancement equivalent to depressing neuronal excitability threshold. Thus, the neuronal populations tend to have a stronger reaction under the same stimulation, and more neurons get excited, leading to more neurons participating in neural coding and phase synchronization motion.
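    As a generic illustration of phase synchronization and how it is quantified (not the three-population stochastic phase model of this paper), a noisy mean-field phase-oscillator sketch with assumed parameters is shown below.

      import numpy as np

      # Hedged, generic sketch: noisy coupled phase oscillators and the order parameter
      # commonly used to quantify phase synchronization. All parameters are illustrative.
      rng = np.random.default_rng(1)
      N, K, noise, dt, steps = 100, 4.0, 0.2, 0.01, 5000
      omega = rng.normal(10.0, 0.5, N)          # natural frequencies (rad/s), assumed spread
      theta = rng.uniform(0.0, 2.0 * np.pi, N)

      for _ in range(steps):
          z = np.mean(np.exp(1j * theta))       # complex mean field
          coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
          theta += (omega + coupling) * dt + noise * np.sqrt(dt) * rng.standard_normal(N)

      r = np.abs(np.mean(np.exp(1j * theta)))
      print(f"order parameter r = {r:.2f}")     # r near 1: strong phase synchronization; near 0: incoherence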

  13. Mass correlation between light and heavy reaction products in multinucleon transfer 197Au+130Te collisions

    NASA Astrophysics Data System (ADS)

    Galtarossa, F.; Corradi, L.; Szilner, S.; Fioretto, E.; Pollarolo, G.; Mijatović, T.; Montanari, D.; Ackermann, D.; Bourgin, D.; Courtin, S.; Fruet, G.; Goasduff, A.; Grebosz, J.; Haas, F.; Jelavić Malenica, D.; Jeong, S. C.; Jia, H. M.; John, P. R.; Mengoni, D.; Milin, M.; Montagnoli, G.; Scarlassara, F.; Skukan, N.; Soić, N.; Stefanini, A. M.; Strano, E.; Tokić, V.; Ur, C. A.; Valiente-Dobón, J. J.; Watanabe, Y. X.

    2018-05-01

    We studied multinucleon transfer reactions in the 197Au+130Te system at Elab=1.07 GeV by employing the PRISMA magnetic spectrometer coupled to a coincident detector. For each light fragment we constructed, in coincidence, the distribution in mass of the heavy partner of the reaction. With a Monte Carlo method, starting from the binary character of the reaction, we simulated the de-excitation process of the produced heavy fragments to be able to understand their final mass distribution. The total cross sections for pure neutron transfer channels have also been extracted and compared with calculations performed with the grazing code.

  14. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  15. More box codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    A new investigation shows that, starting from the BCH (21,15;3) code represented as a 7 x 3 matrix and adding a row and column to add even parity, one obtains an 8 x 4 matrix (32,15;8) code. An additional dimension is obtained by specifying odd parity on the rows and even parity on the columns, i.e., adjoining to the 8 x 4 matrix, the matrix, which is zero except for the fourth column (of all ones). Furthermore, any seven rows and three columns will form the BCH (21,15;3) code. This box code has the same weight structure as the quadratic residue and BCH codes of the same dimensions. Whether there exists an algebraic isomorphism to either code is as yet unknown.
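    The parity extension described above can be sketched as follows; the input array stands in for a BCH (21,15;3) codeword written as a 7 x 3 matrix and is here just a random binary array for illustration, not an actual BCH codeword.

      import numpy as np

      # Hedged sketch: append an even-parity column and an even-parity row to a 7x3 binary
      # array, giving an 8x4 array in which every row and every column has even parity.
      def extend_even_parity(m):
          m = np.asarray(m) % 2
          with_col = np.hstack([m, (m.sum(axis=1) % 2)[:, None]])     # even row parity
          return np.vstack([with_col, with_col.sum(axis=0) % 2])      # even column parity

      word = np.random.randint(0, 2, size=(7, 3))
      ext = extend_even_parity(word)
      assert (ext.sum(axis=0) % 2 == 0).all() and (ext.sum(axis=1) % 2 == 0).all()
      print(ext)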

  16. Suite of Benchmark Tests to Conduct Mesh-Convergence Analysis of Nonlinear and Non-constant Coefficient Transport Codes

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2014-12-01

    Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. In cases where access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections of advection-diffusion-reaction solvers, such as nonlinear advection, diffusion or source terms, as well as non-constant coefficients. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of test complexity. The test suite includes hundreds of unit tests and system tests that check the individual portions of the code. The tests start from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but it also provides comprehensive…
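    A minimal sketch of the observed-order-of-accuracy check on which such a mesh-convergence study relies is shown below; the error values are hypothetical.

      import math

      # Hedged sketch: observed order of accuracy from errors against a benchmark solution
      # on successively refined grids, p = log(e_coarse/e_fine) / log(refinement ratio).
      def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
          return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

      errors = [4.1e-2, 1.05e-2, 2.7e-3]        # L2 errors on grids h, h/2, h/4 (hypothetical)
      for ec, ef in zip(errors, errors[1:]):
          print(f"observed order ~ {observed_order(ec, ef):.2f}")   # expect ~2 for a second-order scheme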

  17. Coding in Muscle Disease.

    PubMed

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  18. Comparison between calculation and measured data on secondary neutron energy spectra by heavy ion reactions from different thick targets.

    PubMed

    Iwase, H; Wiegel, B; Fehrenbacher, G; Schardt, D; Nakamura, T; Niita, K; Radon, T

    2005-01-01

    Measured neutron energy fluences from high-energy heavy ion reactions through targets several centimeters to several hundred centimeters thick were compared with calculations made using the recently developed general-purpose particle and heavy ion transport code system (PHITS). It was confirmed that the PHITS represented neutron production by heavy ion reactions and neutron transport in thick shielding with good overall accuracy.

  19. cncRNAs: Bi-functional RNAs with protein coding and non-coding functions

    PubMed Central

    Kumari, Pooja; Sampath, Karuna

    2015-01-01

    For many decades, the major function of mRNA was thought to be to provide protein-coding information embedded in the genome. The advent of high-throughput sequencing has led to the discovery of pervasive transcription of eukaryotic genomes and opened the world of RNA-mediated gene regulation. Many regulatory RNAs have been found to be incapable of protein coding and are hence termed as non-coding RNAs (ncRNAs). However, studies in recent years have shown that several previously annotated non-coding RNAs have the potential to encode proteins, and conversely, some coding RNAs have regulatory functions independent of the protein they encode. Such bi-functional RNAs, with both protein coding and non-coding functions, which we term as ‘cncRNAs’, have emerged as new players in cellular systems. Here, we describe the functions of some cncRNAs identified from bacteria to humans. Because the functions of many RNAs across genomes remains unclear, we propose that RNAs be classified as coding, non-coding or both only after careful analysis of their functions. PMID:26498036

  20. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-07-01

    The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described and general considerations for the coupling of these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.

  1. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  2. Probing Trapped Ion Energies Via Ion-Molecule Reaction Kinetics: Fourier Transform Ion Cyclotron Resonance Mass Spectrometry

    DTIC Science & Technology

    1992-05-28

    Office of Naval Research Grant N00014-87-J-1248, R&T Code 4134052, Technical Report No. 36. The reactivity (for charge transfer with N2) of the higher-energy J = 1/2 state is approximately three times that of the J = 3/2 state at collision energies

  3. Ensemble coding of face identity is not independent of the coding of individual identity.

    PubMed

    Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina

    2018-06-01

    Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation-of access to ensemble information in the absence of detailed exemplar information-that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.

  4. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species With Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2013-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion of the heating and is then compared to the total heating measured in flight.

  5. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
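    A toy sketch of the syndrome-source-coding idea is given below, using the (7,4) Hamming parity-check matrix purely for illustration; it is not the universal generalization described in the abstract.

      import itertools
      import numpy as np

      # Hedged toy sketch: treat a sparse binary source block x as an "error pattern" and
      # transmit only its syndrome s = H x (mod 2); the decoder recovers the lowest-weight
      # pattern with that syndrome.
      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def compress(x):                      # 7 source bits -> 3 syndrome bits
          return H @ x % 2

      def decompress(s):                    # minimum-weight pattern with syndrome s
          best = None
          for bits in itertools.product([0, 1], repeat=H.shape[1]):
              x = np.array(bits)
              if np.array_equal(H @ x % 2, s) and (best is None or x.sum() < best.sum()):
                  best = x
          return best

      x = np.array([0, 0, 1, 0, 0, 0, 0])   # sparse source block
      print(np.array_equal(decompress(compress(x)), x))   # True for weight-0/1 blocks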

  6. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    PubMed

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that an entirely correct primary diagnosis was assigned in 54 patients (54%), and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated from the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  7. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing the ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The implemented model in the GRIZZLY code is randomly used to initiate ASR in a 2D and 3D lattice to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how ingress of sodium ions and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.

  8. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides efficient and accurate chemical-kinetics computations and sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.
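
    As a rough illustration of the kind of problem such kinetics codes solve (not the LSENS code itself, which is written in FORTRAN 77), the sketch below integrates a single hypothetical irreversible reaction A -> B with an Arrhenius rate in an isothermal static system; all rate parameters are placeholder values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Arrhenius parameters for a single irreversible reaction A -> B.
A_FACTOR = 1.0e10      # pre-exponential factor, 1/s
EA = 1.5e5             # activation energy, J/mol
R = 8.314              # gas constant, J/(mol K)
T = 1200.0             # constant temperature, K (isothermal static system)

def rhs(t, y):
    """dy/dt for concentrations y = [A, B] at constant temperature."""
    k = A_FACTOR * np.exp(-EA / (R * T))
    r = k * y[0]
    return [-r, r]

sol = solve_ivp(rhs, (0.0, 1.0e-2), [1.0, 0.0], method="LSODA", rtol=1e-8)
print(sol.y[:, -1])    # concentrations of A and B at the final time
```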

  9. Trace-shortened Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Solomon, G.

    1994-01-01

    Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2(exp m). It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.

  10. Variation of SNOMED CT coding of clinical research concepts among coding experts.

    PubMed

    Andrews, James E; Richesson, Rachel L; Krischer, Jeffrey

    2007-01-01

    To compare consistency of coding among professional SNOMED CT coders representing three commercial providers of coding services when coding clinical research concepts with SNOMED CT. A sample of clinical research questions from case report forms (CRFs) generated by the NIH-funded Rare Disease Clinical Research Network (RDCRN) were sent to three coding companies with instructions to code the core concepts using SNOMED CT. The sample consisted of 319 question/answer pairs from 15 separate studies. The companies were asked to select SNOMED CT concepts (in any form, including post-coordinated) that capture the core concept(s) reflected in the question. Also, they were asked to state their level of certainty, as well as how precise they felt their coding was. Basic frequencies were calculated to determine raw level agreement among the companies and other descriptive information. Krippendorff's alpha was used to determine a statistical measure of agreement among the coding companies for several measures (semantic, certainty, and precision). No significant level of agreement among the experts was found. There is little semantic agreement in coding of clinical research data items across coders from 3 professional coding services, even using a very liberal definition of agreement.

  11. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    NASA Technical Reports Server (NTRS)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce the software development cost and time. The use of formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to component based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use the code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  12. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
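
    As a hedged aside (not the authors' joint source-relay construction), the sketch below checks the standard condition under which an exponent matrix of circulant permutation shifts produces a girth-4 cycle; the exponent matrices and circulant size are toy values.

```python
import itertools
import numpy as np

def has_girth4_cycle(E, P):
    """Return True if the QC-LDPC code defined by exponent matrix E (entries are circulant
    shifts, -1 meaning an all-zero block) and circulant size P contains a length-4 cycle,
    i.e. e[i1,j1] - e[i1,j2] + e[i2,j2] - e[i2,j1] = 0 (mod P) for some 2x2 submatrix."""
    E = np.asarray(E)
    rows, cols = E.shape
    for i1, i2 in itertools.combinations(range(rows), 2):
        for j1, j2 in itertools.combinations(range(cols), 2):
            block = E[np.ix_([i1, i2], [j1, j2])]
            if (block == -1).any():
                continue                      # an all-zero block breaks the cycle
            if (block[0, 0] - block[0, 1] + block[1, 1] - block[1, 0]) % P == 0:
                return True
    return False

# Toy exponent matrices with circulant size P = 7.
print(has_girth4_cycle([[0, 1, 2], [0, 2, 4]], 7))   # False: no 2x2 submatrix satisfies the condition
print(has_girth4_cycle([[0, 1], [2, 3]], 7))         # True: 0 - 1 + 3 - 2 = 0 mod 7
```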

  13. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module was performed using MSC Patran/Pthermal. The obtained temperature results showed that thermal protection is necessary because of significant heating from the plume.

  14. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image followed by a search on a trellis structured code. This code is a sliding block code that utilizes a constrained size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks in clusters through a clustering algorithm, and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity. The padding sequences absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
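
    A minimal sketch of the transform stage only is given below, assuming 8x8 blocks; the trellis search, clustering, and Mandela ordering of the scheme are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct(image, bsize=8):
    """Apply an orthonormal 2D DCT to each bsize x bsize block of a grayscale image."""
    h, w = image.shape
    coeffs = np.zeros_like(image, dtype=float)
    for r in range(0, h, bsize):
        for c in range(0, w, bsize):
            coeffs[r:r + bsize, c:c + bsize] = dctn(image[r:r + bsize, c:c + bsize], norm="ortho")
    return coeffs

# Round-trip sanity check on a random 256 x 256 "image".
img = np.random.default_rng(0).random((256, 256))
co = blockwise_dct(img)
rec = np.zeros_like(img)
for r in range(0, 256, 8):
    for c in range(0, 256, 8):
        rec[r:r + 8, c:c + 8] = idctn(co[r:r + 8, c:c + 8], norm="ortho")
assert np.allclose(img, rec)
```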

  15. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow burning and high-pressure fast reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and that of its explosive components, based on the low-pressure slow burning term and the high-pressure fast reaction term of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and the PBXC10 multi-component PBX explosives, and the numerical results of the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Neutron production cross sections for (d,n) reactions at 55 MeV

    NASA Astrophysics Data System (ADS)

    Wakasa, T.; Goto, S.; Matsuno, M.; Mitsumoto, S.; Okada, T.; Oshiro, H.; Sakaguchi, S.

    2017-08-01

    The cross sections for (d,n) reactions on {}^natC-{}^{197}Au have been measured at a bombarding energy of 55 MeV and a laboratory scattering angle of θ_lab = 9.5°. The angular distributions for the {}^natC(d,n) reaction have also been obtained at θ_lab = 0°-40°. The neutron energy spectra are dominated by deuteron breakup contributions and their peak positions can be reasonably reproduced by considering the Coulomb force effects. The data are compared with the TENDL-2015 nuclear data and Particle and Heavy Ion Transport code System (PHITS) calculations. Both calculations fail to reproduce the measured energy spectra and angular distributions.

  17. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  18. Diagnostic Coding for Epilepsy.

    PubMed

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  19. Estimating neutron dose equivalent rates from heavy ion reactions around 10 MeV amu(-1) using the PHITS code.

    PubMed

    Iwamoto, Yosuke; Ronningen, R M; Niita, Koji

    2010-04-01

    It has been sometimes necessary for personnel to work in areas where low-energy heavy ions interact with targets or with beam transport equipment and thereby produce significant levels of radiation. Methods to predict doses and to assist shielding design are desirable. The Particle and Heavy Ion Transport code System (PHITS) has been typically used to predict radiation levels around high-energy (above 100 MeV amu(-1)) heavy ion accelerator facilities. However, predictions by PHITS of radiation levels around low-energy (around 10 MeV amu(-1)) heavy ion facilities to our knowledge have not yet been investigated. The influence of the "switching time" in PHITS calculations of low-energy heavy ion reactions, defined as the time when the JAERI Quantum Molecular Dynamics model (JQMD) calculation stops and the Generalized Evaporation Model (GEM) calculation begins, was studied using neutron energy spectra from 6.25 MeV amu(-1) and 10 MeV amu(-1) (12)C ions and 10 MeV amu(-1) (16)O ions incident on a copper target. Using a value of 100 fm c(-1) for the switching time, calculated neutron energy spectra obtained agree well with the experimental data. PHITS was then used with the switching time of 100 fm c(-1) to simulate an experimental study by Ohnesorge et al. by calculating neutron dose equivalent rates produced by 3 MeV amu(-1) to 16 MeV amu(-1) (12)C, (14)N, (16)O, and (20)Ne beams incident on iron, nickel and copper targets. The calculated neutron dose equivalent rates agree very well with the data and follow a general pattern which appears to be insensitive to the heavy ion species but is sensitive to the target material.

  20. Report number codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R.N.

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  1. Speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that
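
    As a small, hedged example of the waveform-coding branch mentioned above (not a scheme discussed in this record), the sketch below applies 8-bit mu-law companding, the characteristic used in G.711 telephony, to a toy signal.

```python
import numpy as np

MU = 255.0   # companding constant of North American / Japanese G.711 mu-law

def mulaw_encode(x):
    """Compress samples in [-1, 1] with the mu-law characteristic, then quantize to 8 bits."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round((y + 1.0) / 2.0 * 255.0).astype(np.uint8)

def mulaw_decode(codes):
    """Invert the 8-bit mu-law code back to an approximate sample value."""
    y = codes.astype(float) / 255.0 * 2.0 - 1.0
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

# A low-level sine tone survives 8-bit mu-law coding with small error.
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
x = 0.1 * np.sin(2.0 * np.pi * 440.0 * t)
err = np.abs(mulaw_decode(mulaw_encode(x)) - x).max()
print(f"max reconstruction error: {err:.4f}")
```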

  2. Coupled-channel analyses on 16O + 147,148,150,152,154Sm heavy-ion fusion reactions

    NASA Astrophysics Data System (ADS)

    Erol, Burcu; Yılmaz, Ahmet Hakan

    2018-02-01

    Heavy-ion collisions are typically characterized by the presence of many open reaction channels. At energies around the Coulomb barrier, the main processes are elastic scattering, inelastic excitations of low-lying modes, and fusion of the two nuclei. The fusion process is generally described by a one-dimensional barrier penetration model, taking the scattering potential as the sum of the Coulomb and proximity potentials. We have performed coupled-channel (CC) calculations for heavy-ion fusion reactions, applying the coupled-channel formalism at energies below the barrier. In this work, fusion cross sections have been calculated and analyzed in detail for the five systems 16O + 147,148,150,152,154Sm in the framework of the coupled-channel approach (using the codes CCFUS and CCDEF) and the Wong formula. Calculated results are compared with experimental data, with CC calculations using the code CCFULL, and with cross-section data taken from 'nrv'. CCDEF, CCFULL and the Wong formula describe the fusion reactions of heavy ions very well when the scattering potential is taken as a Woods-Saxon volume potential with Akyuz-Winther parameters. It was observed that the AW potential parameters are able to reproduce the experimentally observed fusion cross sections reasonably well for these systems. There is good agreement between the calculated results and the experimental and nrv [8] results.
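
    The Wong formula referred to above is simple enough to sketch directly; the barrier height, curvature and radius below are illustrative placeholders rather than fitted values for the 16O + Sm systems.

```python
import numpy as np

def wong_cross_section(E, Vb, hw, Rb):
    """Wong formula for the heavy-ion fusion cross section (energies in MeV, Rb in fm):
    sigma(E) = (hw * Rb^2 / (2 E)) * ln(1 + exp(2 pi (E - Vb) / hw)), returned in mb."""
    sigma_fm2 = hw * Rb**2 / (2.0 * E) * np.log1p(np.exp(2.0 * np.pi * (E - Vb) / hw))
    return 10.0 * sigma_fm2      # 1 fm^2 = 10 mb

# Illustrative (not fitted) barrier parameters for a 16O + Sm-like system.
E = np.arange(55.0, 71.0, 1.0)          # c.m. energies around the barrier, MeV
print(wong_cross_section(E, Vb=60.0, hw=4.0, Rb=10.5))
```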

  3. APOLLO: A computer program for the calculation of chemical equilibrium and reaction kinetics of chemical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, H.D.

    1991-11-01

    Several of the technologies being evaluated for the treatment of waste material involve chemical reactions. Our example is the in situ vitrification (ISV) process, where electrical energy is used to melt soil and waste into a "glass-like" material that immobilizes and encapsulates any residual waste. During the ISV process, various chemical reactions may occur that produce significant amounts of products which must be contained and treated. The APOLLO program was developed to assist in predicting the composition of the gases that are formed. Although the development of this program was directed toward ISV applications, it should be applicable to other technologies where chemical reactions are of interest. This document presents the mathematical methodology of the APOLLO computer code. APOLLO is a computer code that calculates the products of both equilibrium and kinetic chemical reactions. The current version, written in FORTRAN, is readily adaptable to existing transport programs designed for the analysis of chemically reacting flow systems. Separate subroutines EQREACT and KIREACT for equilibrium and kinetic chemistry, respectively, have been developed. A full, detailed description of the numerical techniques used, which include both Lagrange multipliers and a third-order integration scheme, is presented. Sample test problems are presented and the results are in excellent agreement with those reported in the literature.

  5. Identification of Hospitalizations for Intentional Self-Harm when E-Codes are Incompletely Recorded

    PubMed Central

    Patrick, Amanda R.; Miller, Matthew; Barber, Catherine W.; Wang, Philip S.; Canning, Claire F.; Schneeweiss, Sebastian

    2010-01-01

    Context Suicidal behavior has gained attention as an adverse outcome of prescription drug use. Hospitalizations for intentional self-harm, including suicide, can be identified in administrative claims databases using external cause of injury codes (E-codes). However, rates of E-code completeness in US government and commercial claims databases are low due to issues with hospital billing software. Objective To develop an algorithm to identify intentional self-harm hospitalizations using recorded injury and psychiatric diagnosis codes in the absence of E-code reporting. Methods We sampled hospitalizations with an injury diagnosis (ICD-9 800–995) from 2 databases with high rates of E-coding completeness: 1999–2001 British Columbia, Canada data and the 2004 U.S. Nationwide Inpatient Sample. Our gold standard for intentional self-harm was a diagnosis of E950-E958. We constructed algorithms to identify these hospitalizations using information on type of injury and presence of specific psychiatric diagnoses. Results The algorithm that identified intentional self-harm hospitalizations with high sensitivity and specificity was a diagnosis of poisoning; toxic effects; open wound to elbow, wrist, or forearm; or asphyxiation; plus a diagnosis of depression, mania, personality disorder, psychotic disorder, or adjustment reaction. This had a sensitivity of 63%, specificity of 99% and positive predictive value (PPV) of 86% in the Canadian database. Values in the US data were 74%, 98%, and 73%. PPV was highest (80%) in patients under 25 and lowest in those over 65 (44%). Conclusions The proposed algorithm may be useful for researchers attempting to study intentional self-harm in claims databases with incomplete E-code reporting, especially among younger populations. PMID:20922709
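
    A minimal sketch of the kind of rule the algorithm applies is shown below; the ICD-9 prefixes are illustrative placeholders, not the validated code sets from this study.

```python
# Flag a hospitalization as probable intentional self-harm when it carries one of the listed
# injury-type diagnoses AND one of the listed psychiatric diagnoses. The ICD-9-CM prefixes
# below are illustrative placeholders only, not the validated code sets from the study.
INJURY_PREFIXES = ("960", "961", "962", "963", "964", "965",   # poisoning (placeholder range)
                   "881",                                       # open wound of elbow/forearm/wrist
                   "994.7")                                     # asphyxiation/strangulation
PSYCH_PREFIXES = ("296", "301", "295", "309")                   # mood, personality, psychotic, adjustment

def probable_self_harm(dx_codes):
    """dx_codes: iterable of ICD-9-CM diagnosis code strings for one hospitalization."""
    has_injury = any(code.startswith(INJURY_PREFIXES) for code in dx_codes)
    has_psych = any(code.startswith(PSYCH_PREFIXES) for code in dx_codes)
    return has_injury and has_psych

print(probable_self_harm(["965.4", "296.20"]))   # True under these placeholder code sets
print(probable_self_harm(["881.0"]))             # False: no psychiatric diagnosis
```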

  6. Nuclear reaction measurements on tissue-equivalent materials and GEANT4 Monte Carlo simulations for hadrontherapy

    NASA Astrophysics Data System (ADS)

    De Napoli, M.; Romano, F.; D'Urso, D.; Licciardello, T.; Agodi, C.; Candiano, G.; Cappuzzello, F.; Cirrone, G. A. P.; Cuttone, G.; Musumarra, A.; Pandola, L.; Scuderi, V.

    2014-12-01

    When a carbon beam interacts with human tissues, many secondary fragments are produced into the tumor region and the surrounding healthy tissues. Therefore, in hadrontherapy precise dose calculations require Monte Carlo tools equipped with complex nuclear reaction models. To get realistic predictions, however, simulation codes must be validated against experimental results; the wider the dataset is, the more the models are finely tuned. Since no fragmentation data for tissue-equivalent materials at Fermi energies are available in literature, we measured secondary fragments produced by the interaction of a 55.6 MeV u-1 12C beam with thick muscle and cortical bone targets. Three reaction models used by the Geant4 Monte Carlo code, the Binary Light Ions Cascade, the Quantum Molecular Dynamic and the Liege Intranuclear Cascade, have been benchmarked against the collected data. In this work we present the experimental results and we discuss the predictive power of the above mentioned models.

  7. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  8. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

    This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.

  9. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.

  10. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  11. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement for the construction method of systematically constructed Gallager (SCG) (4, k) code, a novel SCG low density parity check (SCG-LDPC)(3969, 3720) code to be suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561,6240) code with code rate of 95.1% is constructed by increasing the length of SCG-LDPC (3969,3720) code, and in a way, the code rate of LDPC codes can better meet the high requirements of optical transmission systems. And then the novel concatenated code is constructed by concatenating SCG-LDPC(6561,6240) code and BCH(127,120) code with code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is respectively 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and SCG-LDPC(6561,6240) code at the bit error rate (BER) of 10^-7.

  12. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  13. Human talus bones from the Middle Pleistocene site of Sima de los Huesos (Sierra de Atapuerca, Burgos, Spain).

    PubMed

    Pablos, Adrián; Martínez, Ignacio; Lorenzo, Carlos; Gracia, Ana; Sala, Nohemi; Arsuaga, Juan Luis

    2013-07-01

    Here we present and describe comparatively 25 talus bones from the Middle Pleistocene site of the Sima de los Huesos (SH) (Sierra de Atapuerca, Burgos, Spain). These tali belong to 14 individuals (11 adult and three immature). Although variation among Middle and Late Pleistocene tali tends to be subtle, this study has identified unique morphological characteristics of the SH tali. They are vertically shorter than those of Late Pleistocene Homo sapiens, and show a shorter head and a broader lateral malleolar facet than all of the samples. Moreover, a few shared characters with Neanderthals are consistent with the hypothesis that the SH population and Neanderthals are sister groups. These shared characters are a broad lateral malleolar facet, a trochlear height intermediate between modern humans and Late Pleistocene H. sapiens, and a short middle calcaneal facet. It has been possible to propose sex assignment for the SH tali based on their size. Stature estimates based on these fossils give a mean stature of 174.4 cm for males and 161.9 cm for females, similar to that obtained based on the long bones from this same site. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. METHES: A Monte Carlo collision code for the simulation of electron transport in low temperature plasmas

    NASA Astrophysics Data System (ADS)

    Rabie, M.; Franck, C. M.

    2016-06-01

    We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
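
    One established Monte Carlo technique such swarm codes rely on is null-collision (constant trial frequency) free-flight sampling, sketched below with a toy collision-frequency model rather than LXCat cross-section data; for simplicity the electron energy is held fixed during the flight, whereas a real swarm code would update it under the applied field.

```python
import numpy as np

rng = np.random.default_rng(1)

def collision_frequency(energy_eV):
    """Toy total collision frequency in 1/s as a function of electron energy (placeholder model)."""
    return 1.0e9 * (1.0 + 0.5 * np.sqrt(energy_eV))

NU_MAX = collision_frequency(100.0)     # upper bound over the energy range of interest

def next_real_collision(energy_eV):
    """Null-collision technique: draw trial collision times at the constant rate NU_MAX and
    accept each trial as a real collision with probability nu(E)/NU_MAX."""
    t = 0.0
    while True:
        t += -np.log(rng.random()) / NU_MAX          # exponential trial flight time
        if rng.random() < collision_frequency(energy_eV) / NU_MAX:
            return t                                  # real collision; otherwise a null collision

print(next_real_collision(5.0))
```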

  15. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  16. A Review on Spectral Amplitude Coding Optical Code Division Multiple Access

    NASA Astrophysics Data System (ADS)

    Kaur, Navpreet; Goyal, Rakesh; Rani, Monika

    2017-06-01

    This manuscript deals with the analysis of a Spectral Amplitude Coding Optical Code Division Multiple Access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. It is perceived that the number of users and the type of codes used in the optical system directly decide the performance of the system. MAI can be restricted by efficient design of optical codes and by implementing them with a unique architecture to accommodate a larger number of users. Hence, it is necessary to design a technique such as spectral direct detection (SDD) with a modified double-weight code, which can provide better cardinality and good correlation properties.

  17. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbolike codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and low maximum variable node degree.

  18. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  19. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  20. Reaction dynamics studies for the system 7Be+58Ni

    NASA Astrophysics Data System (ADS)

    Torresi, D.; Mazzocco, M.; Acosta, L.; Boiano, A.; Boiano, C.; Diaz-Torres, A.; Fierro, N.; Glodariu, T.; Grilj, L.; Guglielmetti, A.; Keeley, N.; La Commara, M.; Martel, I.; Mazzocchi, C.; Molini, P.; Pakou, A.; Parascandolo, C.; Parkar, V. V.; Patronis, N.; Pierroutsakou, D.; Romoli, M.; Rusek, K.; Sanchez-Benitez, A. M.; Sandoli, M.; Signorini, C.; Silvestri, R.; Soramel, F.; Stiliaris, E.; Strano, E.; Stroe, L.; Zerva, K.

    2015-04-01

    The study of reactions induced by exotic weakly bound nuclei at energies around the Coulomb barrier has attracted large interest in the last decade, since the features of these nuclei can deeply affect the reaction dynamics. The discrimination between different reaction mechanisms is, in general, a rather difficult task. It can be achieved by using detector arrays covering a high solid angle and with high granularity that allow the measurement of the reaction products and, possibly, coincidences between them, as, for example, recently done for stable weakly bound nuclei [1, 2]. We investigated the collision of the weakly bound nucleus 7Be on a 58Ni target at a beam energy of 1.1 times the Coulomb barrier, measuring the elastic scattering angular distribution and the energy and angular distributions of 3He and 4He. The 7Be radioactive ion beam was produced by the facility EXOTIC at INFN-LNL with an energy of 22 MeV and an intensity of ~3×10^5 pps. Results showed that the 4He yield is about 4 times larger than the 3He yield, suggesting that reaction mechanisms other than the break-up mostly produce the He isotopes. Theoretical calculations for transfer channels and compound nucleus reactions suggest that complete fusion accounts for (41±5)% of the total reaction cross section extracted from the optical model analysis of the elastic scattering data, and that 3He and 4He stripping are the most populated reaction channels among direct processes. Finally, an estimation of incomplete fusion contributions to the 3,4He production cross sections was performed through semi-classical calculations with the code PLATYPUS [3].

  1. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode
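
    A minimal sketch of the conventional LT encoding step described above (Robust Soliton degree draw, uniform symbol selection, XOR) is given below; the prioritized restriction for low-degree code symbols is indicated only as a comment, and the distribution parameters are typical illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton degree distribution mu(1..k) for a message block of k information symbols."""
    S = c * np.log(k / delta) * np.sqrt(k)
    d = np.arange(1, k + 1)
    rho = np.zeros(k)
    rho[0] = 1.0 / k
    rho[1:] = 1.0 / (d[1:] * (d[1:] - 1.0))
    tau = np.zeros(k)
    pivot = int(np.floor(k / S))
    tau[:pivot - 1] = S / (k * d[:pivot - 1])
    tau[pivot - 1] = S * np.log(S / delta) / k
    mu = rho + tau
    return mu / mu.sum()

def lt_encode_symbol(message, mu):
    """Generate one LT code symbol: draw a degree, pick that many information symbols uniformly
    at random, and XOR them. (The prioritized variant would restrict the uniform choice to the
    high-priority symbols when the drawn degree is small.)"""
    k = len(message)
    degree = rng.choice(np.arange(1, k + 1), p=mu)
    chosen = rng.choice(k, size=degree, replace=False)
    symbol = 0
    for i in chosen:
        symbol ^= message[i]
    return symbol, chosen

message = list(rng.integers(0, 256, size=64))         # 64 one-byte information symbols
mu = robust_soliton(len(message))
code_symbol, neighbors = lt_encode_symbol(message, mu)
print(code_symbol, sorted(neighbors.tolist()))
```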

  2. Bar Code Labels

    NASA Technical Reports Server (NTRS)

    1988-01-01

    American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed aluminum anodizing process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700 degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.

  3. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  4. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  5. Subspace-Aware Index Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.

  6. Subspace-Aware Index Codes

    DOE PAGES

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    2017-04-12

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.

  7. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d sub free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  8. Identification of coding and non-coding mutational hotspots in cancer genomes.

    PubMed

    Piraino, Scott W; Furney, Simon J

    2017-01-05

    The identification of mutations that play a causal role in tumour development, so called "driver" mutations, is of critical importance for understanding how cancers form and how they might be treated. Several large cancer sequencing projects have identified genes that are recurrently mutated in cancer patients, suggesting a role in tumourigenesis. While the landscape of coding drivers has been extensively studied and many of the most prominent driver genes are well characterised, comparatively less is known about the role of mutations in the non-coding regions of the genome in cancer development. The continuing fall in genome sequencing costs has resulted in a concomitant increase in the number of cancer whole genome sequences being produced, facilitating systematic interrogation of both the coding and non-coding regions of cancer genomes. To examine the mutational landscapes of tumour genomes we have developed a novel method to identify mutational hotspots in tumour genomes using both mutational data and information on evolutionary conservation. We have applied our methodology to over 1300 whole cancer genomes and show that it identifies prominent coding and non-coding regions that are known or highly suspected to play a role in cancer. Importantly, we applied our method to the entire genome, rather than relying on predefined annotations (e.g. promoter regions) and we highlight recurrently mutated regions that may have resulted from increased exposure to mutational processes rather than selection, some of which have been identified previously as targets of selection. Finally, we implicate several pan-cancer and cancer-specific candidate non-coding regions, which could be involved in tumourigenesis. We have developed a framework to identify mutational hotspots in cancer genomes, which is applicable to the entire genome. This framework identifies known and novel coding and non-coding mutational hotspots and can be used to differentiate candidate driver regions from

  9. An algorithm for treatment of patients with hypersensitivity reactions after vaccines.

    PubMed

    Wood, Robert A; Berger, Melvin; Dreskin, Stephen C; Setse, Rosanna; Engler, Renata J M; Dekker, Cornelia L; Halsey, Neal A

    2008-09-01

    Concerns about possible allergic reactions to immunizations are raised frequently by both patients/parents and primary care providers. Estimates of true allergic, or immediate hypersensitivity, reactions to routine vaccines range from 1 per 50000 doses for diphtheria-tetanus-pertussis to approximately 1 per 500000 to 1000000 doses for most other vaccines. In a large study from New Zealand, data were collected during a 5-year period on 15 marketed vaccines and revealed an estimated rate of 1 immediate hypersensitivity reaction per 450000 doses of vaccine administered. Another large study, conducted within the Vaccine Safety Datalink, described a range of reaction rates to >7.5 million doses. Depending on the study design and the time after the immunization event, reaction rates varied from 0.65 cases per million doses to 1.53 cases per million doses when additional allergy codes were included. For some vaccines, particularly when allergens such as gelatin are part of the formulation (eg, Japanese encephalitis), higher rates of serious allergic reactions may occur. Although these per-dose estimates suggest that true hypersensitivity reactions are quite rare, the large number of doses that are administered, especially for the commonly used vaccines, makes this a relatively common clinical problem. In this review, we present background information on vaccine hypersensitivity, followed by a detailed algorithm that provides a rational and organized approach for the evaluation and treatment of patients with suspected hypersensitivity. We then include 3 cases of suspected allergic reactions to vaccines that have been referred to the Clinical Immunization Safety Assessment network to demonstrate the practical application of the algorithm.

  10. DEPENDENCE OF X-RAY BURST MODELS ON NUCLEAR REACTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cyburt, R. H.; Keek, L.; Schatz, H.

    2016-10-20

    X-ray bursts are thermonuclear flashes on the surface of accreting neutron stars, and reliable burst models are needed to interpret observations in terms of properties of the neutron star and the binary system. We investigate the dependence of X-ray burst models on uncertainties in (p,γ), (α,γ), and (α,p) nuclear reaction rates using fully self-consistent burst models that account for the feedbacks between changes in nuclear energy generation and changes in astrophysical conditions. A two-step approach first identified sensitive nuclear reaction rates in a single-zone model with ignition conditions chosen to match calculations with a state-of-the-art 1D multi-zone model based on the Kepler stellar evolution code. All relevant reaction rates on neutron-deficient isotopes up to mass 106 were individually varied by a factor of 100 up and down. Calculations of the 84 changes in reaction rate with the highest impact were then repeated in the 1D multi-zone model. We find a number of uncertain reaction rates that affect predictions of light curves and burst ashes significantly. The results provide insights into the nuclear processes that shape observables from X-ray bursts, and guidance for future nuclear physics work to reduce nuclear uncertainties in X-ray burst models.

  11. The r-Java 2.0 code: nuclear physics

    NASA Astrophysics Data System (ADS)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield are studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.

  12. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  13. Extension of the BRYNTRN code to monoenergetic light ion beams

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.

    1994-01-01

    A monoenergetic version of the BRYNTRN transport code is extended to beam transport of light ions (H-2, H-3, He-3, and He-4) in shielding materials (thick targets). The redistribution of energy in nuclear reactions is included in transport solutions that use nuclear fragmentation models. We also consider an equilibrium target-fragment spectrum for nuclei with mass number greater than four to include target fragmentation effects in the linear energy transfer (LET) spectrum. Illustrative results for water and aluminum shielding, including energy and LET spectra, are discussed for high-energy beams of H-2 and He-4.

  14. Is it Code Imperfection or 'garbage in Garbage Out'? Outline of Experiences from a Comprehensive Adr Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. We start testing from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. For all of the mentioned cases we conduct mesh convergence tests. These tests compare the results' observed order of accuracy against the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize Symmetry, Complete Richardson Extrapolation and the Method of False Injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport.
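
    As an example of the mesh-convergence check mentioned above, the observed order of accuracy can be estimated from the errors on two successively refined grids and compared with the formal order of the discretization. This is a standard Richardson-type estimate, shown here only for illustration.

      import math

      def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
          """p_obs = ln(e_coarse / e_fine) / ln(r); should approach the formal order."""
          return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

      # e.g. halving the mesh spacing reduces the L2 error from 4.0e-3 to 1.0e-3
      print(observed_order(4.0e-3, 1.0e-3))   # ~2.0, consistent with a second-order scheme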

  15. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.

  16. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  17. Theory of epigenetic coding.

    PubMed

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  18. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use is proposed of coded modulation in conjunction with concatenated (or cascaded) coding. A good short bandwidth efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.

  19. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.
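
    The accumulate-repeat-accumulate structure itself is easy to sketch over GF(2): an outer accumulator (precoder), a repetition stage, an interleaver, and an inner accumulator. The repetition factor, interleaver, puncturing, and the mapping onto a high-order constellation used in an actual ARA coded-modulation design are not specified here, so the choices below are purely illustrative.

      import random

      def accumulate(bits):
          """Running mod-2 sum (the 1/(1+D) accumulator)."""
          out, s = [], 0
          for b in bits:
              s ^= b
              out.append(s)
          return out

      def ara_encode(info_bits, repeat=3, seed=0):
          pre = accumulate(info_bits)                         # outer accumulator (precoder)
          rep = [b for b in pre for _ in range(repeat)]       # repetition stage
          perm = list(range(len(rep)))
          random.Random(seed).shuffle(perm)                   # interleaver (random, for illustration)
          return accumulate([rep[i] for i in perm])           # inner accumulator

      print(ara_encode([1, 0, 1, 1]))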

  20. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  1. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions provide a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  2. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    NASA Astrophysics Data System (ADS)

    Watanabe, Y.; Abe, S.

    2014-06-01

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions provide a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  3. Long non-coding RNA expression profile in cervical cancer tissues

    PubMed Central

    Zhu, Hua; Chen, Xiangjian; Hu, Yan; Shi, Zhengzheng; Zhou, Qing; Zheng, Jingjie; Wang, Yifeng

    2017-01-01

    Cervical cancer (CC), one of the most common types of cancer of the female population, presents an enormous challenge in diagnosis and treatment. Long non-coding (lnc)RNAs, non-coding (nc)RNAs with length >200 nucleotides, have been identified to be associated with multiple types of cancer, including CC. This class of nc transcripts serves an important role in tumor suppression and oncogenic signaling pathways. In the present study, the microarray method was used to obtain the expression profile of lncRNAs and protein-coding mRNAs and to compare the expression of lncRNAs between CC tissues and corresponding adjacent non-cancerous tissues in order to screen potential lncRNAs for associations with CC. Overall, 3,356 lncRNAs with a significantly different expression pattern in CC tissues compared with adjacent non-cancerous tissues were identified, of which 1,857 were upregulated. These differentially expressed lncRNAs were additionally classified into 5 subgroups. Reverse transcription quantitative polymerase chain reactions were performed to validate the expression pattern of 5 randomly selected lncRNAs, and 2 lncRNAs were identified to have significantly different expression in CC samples compared with adjacent non-cancerous tissues. This finding suggests that those lncRNAs with different expression may serve important roles in the development of CC, and the expression data may provide information for additional study on the involvement of lncRNAs in CC. PMID:28789353

  4. Coding and decoding for code division multiple user communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  5. Role of nuclear reactions on stellar evolution of intermediate-mass stars

    NASA Astrophysics Data System (ADS)

    Möller, H.; Jones, S.; Fischer, T.; Martínez-Pinedo, G.

    2018-01-01

    The evolution of intermediate-mass stars (8-12 solar masses) represents one of the most challenging subjects in nuclear astrophysics. Their final fate is highly uncertain and strongly model dependent. They can become white dwarfs, they can undergo electron-capture or core-collapse supernovae or they might even proceed towards explosive oxygen burning and a subsequent thermonuclear explosion. We believe that an accurate description of nuclear reactions is crucial for the determination of the pre-supernova structure of these stars. We argue that due to the possible development of an oxygen deflagration, a hydrodynamic description has to be used. We implement a nuclear reaction network with ∼200 nuclear species into the implicit hydrodynamic code AGILE. The reaction network considers all relevant nuclear electron captures and beta-decays. For selected relevant nuclear species, we include a set of updated reaction rates, whose role in the evolution of the stellar core we discuss using selected stellar models as examples. We find that the final fate of these intermediate-mass stars depends sensitively on the density threshold for the weak processes that deleptonize the core.
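
    As a toy illustration of coupling a deleptonizing weak reaction into a stiff network integration (not the ∼200-species network or the AGILE code itself), a single effective electron-capture channel converting species A into B at an assumed constant rate can be integrated with an implicit solver, as one would for a stiff reaction network.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, lam_ec):
          a, b = y
          return [-lam_ec * a, lam_ec * a]       # dA/dt, dB/dt for A -> B by electron capture

      lam_ec = 1.0e-2                            # effective capture rate [1/s], assumed value
      sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0], args=(lam_ec,), method="BDF")
      print(sol.y[:, -1])                        # final abundances of A and B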

  6. A Radiation Chemistry Code Based on the Greens Functions of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Ionizing radiation produces several radiolytic species such as ·OH, e-aq, and H· when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on the DNA damage by ionizing radiation remain, notably on the indirect effect, i.e. the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code that is based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions with time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form. Therefore, the radial Green's function, which is much simpler, is often used. Hence, much effort has been devoted to the sampling of the radial Green's functions, for which we have developed a sampling algorithm. This algorithm only yields the inter-particle distance vector length after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help the understanding of the contribution of the indirect effect in the DNA damage induced by ionizing radiation.
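
    A sketch of the simplest case mentioned above, sampling the free-diffusion Green's function for the separation of a particle pair over one time step (no reaction), is shown below as a baseline for the radial and angular distributions being discussed. Partially diffusion-controlled reactions require the full Green's function and are not treated in this sketch; all parameter values are illustrative.

      import numpy as np

      def propagate_pair_separation(r0_vec, D_rel, dt, rng):
          """Free diffusion: each Cartesian component gets a Gaussian step of std sqrt(2*D_rel*dt).

          r0_vec is the initial separation vector; D_rel = D1 + D2 is the relative
          diffusion coefficient of the pair.
          """
          sigma = np.sqrt(2.0 * D_rel * dt)
          return r0_vec + rng.normal(0.0, sigma, size=3)

      rng = np.random.default_rng(1)
      r0 = np.array([1.0, 0.0, 0.0])                 # initial separation (nm), illustrative
      samples = np.array([propagate_pair_separation(r0, D_rel=0.5, dt=1e-3, rng=rng)
                          for _ in range(10000)])
      radii = np.linalg.norm(samples, axis=1)        # radial distribution after one step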

  7. Presenting a new kinetic model for methanol to light olefins reactions over a hierarchical SAPO-34 catalyst using the Langmuir-Hinshelwood-Hougen-Watson mechanism

    NASA Astrophysics Data System (ADS)

    Javad Azarhoosh, Mohammad; Halladj, Rouein; Askari, Sima

    2017-10-01

    In this study, a new kinetic model for methanol to light olefins (MTO) reactions over a hierarchical SAPO-34 catalyst using the Langmuir-Hinshelwood-Hougen-Watson (LHHW) mechanism was presented and the kinetic parameters were obtained using a genetic algorithm (GA) and genetic programming (GP). Several kinetic models for the MTO reactions have been presented. However, due to the complexity of the reactions, most reactions are considered lumped and elementary, which cannot be deemed a completely accurate kinetic model of the process. Therefore, in this study, the LHHW mechanism is presented as the kinetic model of MTO reactions. Because of the non-linearity of the kinetic models and the existence of many local optima, evolutionary algorithms (GA and GP) are used in this study to estimate the kinetic parameters in the rate equations. Via the simultaneous connection of the code related to modelling the reactor and the GA and GP codes in the MATLAB R2013a software, optimization of the kinetic model parameters was performed such that the least difference between the results from the kinetic models and experimental results was obtained and the best kinetic parameters of the MTO process reactions were achieved. A comparison of the results from the model with experimental results showed that the present model possesses good accuracy.
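
    The fitting step can be illustrated with a single LHHW-type rate expression and an evolutionary optimizer (here SciPy's differential evolution, used in the same spirit as the GA/GP search described above). The rate form, data, and parameter bounds below are assumptions for illustration, not the paper's model.

      import numpy as np
      from scipy.optimize import differential_evolution

      def lhhw_rate(p_meoh, k, K):
          """Surface-reaction-limited LHHW form: r = k*K*P / (1 + K*P)^2."""
          return k * K * p_meoh / (1.0 + K * p_meoh) ** 2

      p = np.array([0.1, 0.3, 0.6, 1.0, 1.5])          # methanol partial pressure (bar), assumed
      rng = np.random.default_rng(0)
      r_obs = lhhw_rate(p, 2.0, 1.3) * (1.0 + 0.02 * rng.normal(size=p.size))   # synthetic data

      def loss(theta):
          k, K = theta
          return np.sum((lhhw_rate(p, k, K) - r_obs) ** 2)

      result = differential_evolution(loss, bounds=[(0.01, 10.0), (0.01, 10.0)], seed=0)
      print(result.x)                                  # recovered (k, K), close to (2.0, 1.3)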

  8. New Approach for Nuclear Reaction Model in the Combination of Intra-nuclear Cascade and DWBA

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Iwamoto, O.; Iwamoto, Y.; Sato, T.; Niita, K.

    2014-04-01

    We applied a new nuclear reaction model that is a combination of the intra-nuclear cascade model and the distorted wave Born approximation (DWBA) calculation to estimate neutron spectra in reactions induced by protons incident on 7Li and 9Be targets at incident energies below 50 MeV, using the particle and heavy ion transport code system (PHITS). The results obtained by PHITS with the new model reproduce the sharp peaks observed in the experimental double-differential cross sections as a result of taking into account transitions between discrete nuclear states in the DWBA. An excellent agreement was observed between the calculated results obtained using the combination model and experimental data on neutron yields from thick targets in the inclusive (p, xn) reaction.

  9. Students' Views and Attitudes Towards the Communication Code Used in Press Articles About Science

    NASA Astrophysics Data System (ADS)

    Halkia, Krystallia; Mantzouridis, Dimitris

    2005-10-01

    The present research was designed to investigate the reaction of secondary school students to the communication code that the press uses in science articles: it attempts to trace which communication techniques can be of potential use in science education. The sample of the research consists of 351 secondary school students. The research instrument is a questionnaire, which attempts to trace students’ preferences regarding newspaper science articles, to explore students’ attitudes towards the science articles published in the press and to investigate students’ reactions towards four newspaper science articles. These articles deal with different aspects of science and reflect different communication strategies. The results of the research reveal that secondary school students view the communication codes used in press science articles as being more interesting and comprehensible than those of their science textbooks. Predominantly, they do not select science articles that present their data in a scientific way (diagrams and abstract graphs). On the contrary, they do select science articles and passages in them, which use an emotional/‘poetic’ language with a lot of metaphors and analogies to introduce complex science concepts. It also seems that the narrative elements found in popularized science articles attract students’ interest and motivate them towards further reading.

  10. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    Computational Fluid Dynamics (CFD) has considerably evolved in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, missile, and ramjet nozzles. One of the problems of interest to NASA has always been the performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines it is necessary to resolve the flowfield into a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid is associated with a prohibitive increase in computational time that can limit the value of CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF defined combustion chamber wall geometry. These options minimize the learning effort for TDK users, and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single zone formulation of the LTCP has limited the code's applicability to problems with complex geometry. Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, two

  11. Code dependencies of pre-supernova evolution and nucleosynthesis in massive stars: evolution to the end of core helium burning

    DOE PAGES

    Jones, S.; Hirschi, R.; Pignatari, M.; ...

    2015-01-15

    We present a comparison of 15 M⊙, 20 M⊙ and 25 M⊙ stellar models from three different codes (GENEC, KEPLER and MESA) and their nucleosynthetic yields. The models are calculated from the main sequence up to the pre-supernova (pre-SN) stage and do not include rotation. The GENEC and KEPLER models hold physics assumptions that are characteristic of the two codes. The MESA code is generally more flexible; overshooting of the convective core during the hydrogen and helium burning phases in MESA is chosen such that the CO core masses are consistent with those in the GENEC models. Full nucleosynthesis calculations are performed for all models using the NuGrid post-processing tool MPPNP, and the key energy-generating nuclear reaction rates are the same for all codes. We are thus able to highlight the key differences between the models that are caused by the contrasting physics assumptions and numerical implementations of the three codes. A reasonable agreement is found between the surface abundances predicted by the models computed using the different codes, with GENEC exhibiting the strongest enrichment of H-burning products and KEPLER exhibiting the weakest. There are large variations in both the structure and composition of the models (the 15 M⊙ and 20 M⊙ in particular) at the pre-SN stage from code to code, caused primarily by convective shell merging during the advanced stages. For example, the C-shell abundances of O, Ne and Mg predicted by the three codes span one order of magnitude in the 15 M⊙ models. For the alpha elements between Si and Fe the differences are even larger. The s-process abundances in the C shell are modified by the merging of convective shells; the modification is strongest in the 15 M⊙ model, in which the C-shell material is exposed to O-burning temperatures and the γ-process is activated. The variation in the s-process abundances across the codes is smallest in the 25 M⊙ models, where it is comparable to the impact of

  12. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
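
    The decoder interaction described above amounts to a simple control loop; the decoder internals below are hypothetical placeholders, not the paper's algorithms. The inner turbo decoder performs one iteration at a time, the outer Reed-Solomon decoder is tried on its soft output, and iteration stops as soon as the outer decoding succeeds or a preset maximum is reached.

      def interactive_decode(channel_llrs, turbo_iterate, rs_decode, max_iters=10):
          """Run inner turbo iterations until the outer RS decoder succeeds (sketch)."""
          state = None
          for iteration in range(1, max_iters + 1):
              soft_out, state = turbo_iterate(channel_llrs, state)   # one inner iteration
              ok, message = rs_decode(soft_out)                      # reliability-based RS decoding
              if ok:
                  return message, iteration    # outer success terminates the inner iterations early
          return None, max_iters               # declare a frame error after max_iters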

  13. a Study on 4 Reactions Forming 46Ti*

    NASA Astrophysics Data System (ADS)

    Cicerchia, M.; Marchi, T.; Gramegna, F.; Cinausero, M.; Fabris, D.; Mantovani, G.; Degerlier, M.; Morelli, L.; Bruno, M.; D'Agostino, M.; Frosin, C.; Barlini, S.; Piantelli, S.; Valdrè, S.; Bini, M.; Pasquali, G.; Casini, G.; Pastore, G.; Gruyer, D.; Ottanelli, P.; Camaiani, A.; Gelli, N.; Olmi, A.; Poggi, G.; Lombardo, I.; Dell'Aquila, D.; Cieplicka-Orynczak, N.

    2018-02-01

    The NUCL-EX collaboration is carrying out an extensive research program on preequilibrium emission of light charged particles from hot nuclei. The ultimate goal is to study how cluster structures affect nuclear reactions [1,2,3,4]. Indeed, a strong correlation between nuclear structure and reaction dynamics emerges when some nucleons or clusters of nucleons are emitted or captured [5]. To this end, the four reactions 16O+30Si, 16O+30Si, 18O+28Si and 19F+27Al have been measured at about 120 MeV projectile energy. Experimental data were collected at Legnaro National Laboratories, using the GARFIELD+RCo array, fully equipped with digital electronics [6]. Following an initial identification of particles and the energy calibration procedures, the complete analysis is being performed on an event-by-event basis. Experimental data are then compared to the theoretical predictions where events are generated by numerical codes based on pre-equilibrium and statistical models and then filtered through a software replica of the setup. Differences between the experimental data and the predicted data highlight effects related to the entrance channel and to the cluster nature of the colliding ions. After a general introduction on the experimental campaign, this contribution will focus on the preliminary results obtained so far.

  14. A reactive flow model with coupled reaction kinetics for detonation and combustion in non-ideal explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A new reactive flow model for highly non-ideal explosives and propellants is presented. These compositions, which contain large amounts of metal, upon explosion have reaction kinetics that are characteristic of both fast detonation and slow metal combustion chemistry. A reaction model for these systems was incorporated into the two-dimensional, finite element, Lagrangian hydrodynamic code, DYNA2D. A description of how to determine the model parameters is given. The use of the model and variations are applied to AP, Al, and nitramine underwater explosive and propellant systems.

  15. Ten-year review reveals changing trends and severity of allergic reactions to nuts and other foods.

    PubMed

    Johnson, Jennifer; Malinovschi, Andrei; Alving, Kjell; Lidholm, Jonas; Borres, Magnus P; Nordvall, Lennart

    2014-08-01

    Over the past few decades, the incidence of food allergies has risen and Sweden has increased its import of peanuts and exotic nuts, such as cashew nuts, which may cause severe allergic reactions. This study aimed to retrospectively investigate paediatric emergency visits due to food reactions over a 10-year period, focusing on reactions to peanuts and tree nuts. Emergency visits to Uppsala University Children's Hospital, Sweden, between September 2001 and December 2010, were reviewed, and cases containing diagnostic codes for anaphylaxis, allergic reactions or allergy and hypersensitivity not caused by drugs or biological substances were retrieved. We analysed 703 emergency visits made by 578 individuals with food allergies. Peanuts and tree nuts accounted for 50% of the food allergies and were more frequently associated with adrenaline treatment and hospitalisation than other foods. Cashew nut reactions increased over the study period, and together with peanuts, they were responsible for more anaphylactic reactions than hazelnuts. Peanut and tree nut reactions were more likely to result in adrenaline treatment and hospitalisation than other food reactions. Peanut and cashew nut reactions were more likely to cause anaphylaxis than hazelnuts. Cashew nut reactions increased during the study period. ©2014 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  16. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  17. Nanoparticle based bio-bar code technology for trace analysis of aflatoxin B1 in Chinese herbs.

    PubMed

    Yu, Yu-Yan; Chen, Yuan-Yuan; Gao, Xuan; Liu, Yuan-Yuan; Zhang, Hong-Yan; Wang, Tong-Ying

    2018-04-01

    A novel and sensitive assay for aflatoxin B1 (AFB1) detection has been developed by using a bio-bar code assay (BCA). The method relies on polyclonal antibodies encoded with DNA-modified gold nanoparticles (NP) and monoclonal antibodies attached to magnetic microparticles (MMP), with subsequent detection of the amplified target in the form of a bio-bar code using a fluorescent quantitative polymerase chain reaction (FQ-PCR) detection method. First, NP probes encoded with DNA unique to AFB1 and MMP probes with monoclonal antibodies that bind AFB1 specifically were prepared. Then, the MMP-AFB1-NP sandwich compounds were acquired; dehybridization of the oligonucleotides on the nanoparticle surface allows the determination of the presence of AFB1 by identifying the oligonucleotide sequence released from the NP through FQ-PCR detection. The bio-bar code system for detecting AFB1 was established, and the sensitivity limit was about 10^-8 ng/mL. Compared with ELISA assays for detecting the same target, this shows that AFB1 can be detected at low attomolar levels with the bio-bar-code amplification approach. This is also the first demonstration of a bio-bar code type assay for the detection of AFB1 in Chinese herbs. Copyright © 2017. Published by Elsevier B.V.

  18. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.

  19. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  20. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    PubMed

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.

  1. College Students' Reactions to Participating in Relational Trauma Research: A Mixed Methodological Study.

    PubMed

    Edwards, Katie M; Neal, Angela M; Dardis, Christina M; Kelley, Erika L; Gidycz, Christine A; Ellis, Gary

    2015-08-24

    Using a mixed methodology, the present study compared men's and women's perceived benefits and emotional reactions with participating in research that inquired about child maltreatment and intimate partner violence (IPV) victimization and perpetration. Participants consisted of 703 college students (357 women, 346 men), ages 18 to 25 who reported on their childhood maltreatment, adolescent and adult IPV victimization and perpetration, and their reactions (perceived benefits and emotional effects) to participating. Participants' reactions to participating were assessed using quantitative scales, as well as open-ended written responses that were content coded by researchers. Women reported more personal benefits from research, whereas men and women reported similar levels of emotional reactions to research participation. Furthermore, greater frequencies of child maltreatment and IPV victimization were related to higher levels of emotional reactions. Common self-identified reasons for emotional reactions (e.g., not liking to think about abuse in general, personal victimization experiences) and benefits (e.g., reflection and awareness about oneself, learning about IPV) were also presented and analyzed. These data underscore the importance of future research that examines the behavioral impact of research participation utilizing longitudinal and in-depth qualitative methodologies. Findings also highlight the potential psychoeducational value of research on understanding the reasons underlying participants' benefits and emotional effects. © The Author(s) 2015.

  2. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  3. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  4. Coding for urologic office procedures.

    PubMed

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Peptide code-on-a-microplate for protease activity analysis via MALDI-TOF mass spectrometric quantitation.

    PubMed

    Hu, Junjie; Liu, Fei; Ju, Huangxian

    2015-04-21

    A peptide-encoded microplate was proposed for MALDI-TOF mass spectrometric (MS) analysis of protease activity. The peptide codes were designed to contain a coding region and a protease substrate for enzymatic cleavage, and an internal standard method was proposed for the MS quantitation of the cleavage products of these peptide codes. Upon the cleavage reaction in the presence of target proteases, the coding regions were released from the microplate, which were directly quantitated by using corresponding peptides with a one-amino-acid difference as the internal standards. The coding region could be used as the unique "Protease ID" for the identification of the corresponding protease, and the amount of the cleavage product was used for protease activity analysis. Using trypsin and chymotrypsin as the model proteases to verify the multiplex protease assay, the designed "Trypsin ID" and "Chymotrypsin ID" occurred at m/z 761.6 and 711.6. The logarithm value of the intensity ratio of "Protease ID" to internal standard was proportional to trypsin and chymotrypsin concentration in a range from 5.0 to 500 and 10 to 500 nM, respectively. The detection limits for trypsin and chymotrypsin were 2.3 and 5.2 nM, respectively. The peptide-encoded microplate showed good selectivity. This proposed method provided a powerful tool for convenient identification and activity analysis of multiplex proteases.
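
    The internal-standard quantitation described above reduces to a simple linear calibration: the logarithm of the intensity ratio is fitted against protease concentration over the working range and then inverted for unknown samples. The numbers below are made up for illustration and are not the paper's data.

      import numpy as np

      conc = np.array([5.0, 10.0, 50.0, 100.0, 500.0])        # standard concentrations (nM), assumed
      log_ratio = np.array([0.06, 0.07, 0.15, 0.25, 1.05])    # log(I_ProteaseID / I_IS), illustrative

      slope, intercept = np.polyfit(conc, log_ratio, 1)       # linear calibration

      def estimate_concentration(log_ratio_unknown):
          """Invert the calibration line to estimate protease concentration in nM."""
          return (log_ratio_unknown - intercept) / slope

      print(round(estimate_concentration(0.5), 1))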

  6. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
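
    The nested scalar quantization at the heart of such a Wyner-Ziv enhancement layer can be sketched as follows: the encoder sends only the coset index of the quantized coefficient, and the decoder selects the reconstruction in that coset closest to the base-layer side information. The step size and nesting ratio are illustrative, and the real coder additionally applies a DCT and LDPC-based Slepian-Wolf coding of the indices.

      def wz_encode(x, step=4.0, nesting=4):
          """Quantize x and transmit only the coset index (log2(nesting) bits)."""
          return round(x / step) % nesting

      def wz_decode(coset, side_info, step=4.0, nesting=4):
          """Pick the quantizer cell in the received coset closest to the side information."""
          q_side = round(side_info / step)
          base = q_side - (q_side % nesting) + coset
          candidates = [base - nesting, base, base + nesting]
          best = min(candidates, key=lambda q: abs(q * step - side_info))
          return best * step

      x, y = 19.0, 17.0                      # source coefficient and correlated side information
      print(wz_decode(wz_encode(x), y))      # 20.0, close to x because y is close to x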

  7. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
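
    As an example of the error-detection layer mentioned above, a bit-serial CRC-16 in the CCITT-FALSE configuration (polynomial 0x1021, initial value 0xFFFF) can be written in a few lines; this is a generic illustration of a 16-bit CRC, not the project's simulation software.

      def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
          """Bit-serial CRC-16, polynomial 0x1021, no reflection, no final XOR."""
          crc = init
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  if crc & 0x8000:
                      crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                  else:
                      crc = (crc << 1) & 0xFFFF
          return crc

      print(hex(crc16_ccitt(b"123456789")))   # 0x29b1, the standard check value for this variant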

  8. Critical Care Coding for Neurologists.

    PubMed

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  9. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
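
    The least-squares combination described above can be illustrated with a textbook generalized least-squares update (this is not FERRET's input format or internals): measurements y with covariance V are related to parameters x through a sensitivity matrix A, and both the estimate and its covariance follow directly.

      import numpy as np

      def gls_estimate(A, y, V):
          """x_hat = (A^T V^-1 A)^-1 A^T V^-1 y, with posterior covariance (A^T V^-1 A)^-1."""
          Vinv = np.linalg.inv(V)
          cov_x = np.linalg.inv(A.T @ Vinv @ A)
          x_hat = cov_x @ A.T @ Vinv @ y
          return x_hat, cov_x

      # two correlated measurements of the same quantity (illustrative numbers)
      A = np.array([[1.0], [1.0]])
      y = np.array([10.2, 9.6])
      V = np.array([[0.25, 0.05],
                    [0.05, 0.16]])
      x, cov = gls_estimate(A, y, V)
      print(x, np.sqrt(np.diag(cov)))         # combined value and its uncertainty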

  10. Spallation neutron production and the current intra-nuclear cascade and transport codes

    NASA Astrophysics Data System (ADS)

    Filges, D.; Goldenbaum, F.; Enke, M.; Galin, J.; Herbach, C.-M.; Hilscher, D.; Jahnke, U.; Letourneau, A.; Lott, B.; Neef, R.-D.; Nünighoff, K.; Paul, N.; Péghaire, A.; Pienkowski, L.; Schaal, H.; Schröder, U.; Sterzenbach, G.; Tietze, A.; Tishchenko, V.; Toke, J.; Wohlmuther, M.

    A recent resurgence of interest in energetic proton-induced production of neutrons originates largely from the inception of projects for target stations of intense spallation neutron sources, like the planned European Spallation Source (ESS), accelerator-driven nuclear reactors, nuclear waste transmutation, and also from the application for radioactive beams. In the framework of such neutron production, of major importance is the search for the most efficient conversion of the primary beam energy into neutron production. Although the issue has been quite successfully addressed experimentally by varying the incident proton energy for various target materials and by covering a huge collection of different target geometries (providing an exhaustive matrix of benchmark data), the ultimate challenge is to increase the predictive power of transport codes currently on the market. To scrutinize these codes, calculations of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities, neutron multiplicity and energy distributions, and the development of hadronic showers are confronted with recent experimental data of the NESSI collaboration. Program packages like HERMES, LCS or MCNPX master the prediction of reaction cross-sections, hadronic interaction lengths, averaged neutron multiplicities and neutron multiplicity distributions in thick and thin targets for a wide spectrum of incident proton energies, geometrical shapes and materials of the target, generally within less than 10% deviation, while production cross-section measurements for light charged particles on thin targets show that appreciable differences exist between these models.

  11. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm

  12. Status of the R-matrix Code AMUR toward a consistent cross-section evaluation and covariance analysis for the light nuclei

    NASA Astrophysics Data System (ADS)

    Kunieda, Satoshi

    2017-09-01

    We report the status of the R-matrix code AMUR toward consistent cross-section evaluation and covariance analysis for the light-mass nuclei. The applicability of the code is extended by adding computational capability for charged-particle elastic scattering cross-sections and neutron capture cross-sections, as illustrated by example results in the main text. A simultaneous analysis is performed on the 17O compound system including the 16O(n,tot) and 13C(α,n)16O reactions together with the 16O(n,n) and 13C(α,α) scattering cross-sections. It is found that a large theoretical background is required for each reaction process to obtain a simultaneous fit to all the experimental cross-sections we analyzed. Also, the hard-sphere radii should be assumed to be different from the channel radii. Although these are technical approaches, we could learn about the roles and sources of the theoretical background in the standard R-matrix.

  13. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  14. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…

  15. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    ERIC Educational Resources Information Center

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  16. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by abolishing the multi-level structure. Simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.

  17. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The following codes are considered: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following sections, together with its current modeling capabilities, level of validation, pre/post-processing, and future development and validation requirements. This report addresses only previously published validations of the codes; the codes themselves have since been developed further to extend their capabilities.

  18. Industrial Computer Codes

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), the KTK (knife-to-knife) labyrinth seal code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals, and DYSEAL provides dynamics for the seal geometry.

  19. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  20. Generic reactive transport codes as flexible tools to integrate soil organic matter degradation models with water, transport and geochemistry in soils

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik; Gérard, Fréderic; Mayer, Uli; Simunek, Jirka; Leterme, Bertrand

    2016-04-01

    A large number of organic matter degradation, CO2 transport and dissolved organic matter models have been developed during the last decades. However, organic matter degradation models are in many cases strictly hard-coded in terms of organic pools, degradation kinetics and dependency on environmental variables. The scientific input of the model user is typically limited to the adjustment of input parameters. In addition, coupling with geochemical soil processes including aqueous speciation, pH-dependent sorption and colloid-facilitated transport is not incorporated in many of these models, strongly limiting the scope of their application. Furthermore, the most comprehensive organic matter degradation models are combined with simplified representations of flow and transport processes in the soil system. We illustrate the capability of generic reactive transport codes to overcome these shortcomings. The formulations of reactive transport codes include a physics-based continuum representation of flow and transport processes, while biogeochemical reactions can be described as equilibrium processes constrained by thermodynamic principles and/or kinetic reaction networks. The flexibility of these types of codes allows for straightforward extension of reaction networks, permits the inclusion of new model components (e.g. organic matter pools, rate equations, parameter dependency on environmental conditions) and thereby facilitates an application-tailored implementation of organic matter degradation models and related processes. A numerical benchmark involving two reactive transport codes (HPx and MIN3P) demonstrates how the process-based simulation of transient variably saturated water flow (Richards equation), solute transport (advection-dispersion equation), heat transfer and diffusion in the gas phase can be combined with a flexible implementation of a soil organic matter degradation model. The benchmark includes the production of leachable organic matter
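
    As a rough illustration of the kind of reaction network such generic codes can host, the sketch below steps a two-pool, first-order soil organic matter degradation scheme with a Q10 temperature modifier, routing part of the degraded carbon to leachable dissolved organic matter and the rest to CO2. The pool names, rate constants, partitioning fractions and the Q10 form are illustrative assumptions, not HPx or MIN3P input.

```python
# Minimal sketch (not HPx/MIN3P input): a two-pool first-order soil organic
# matter degradation network of the kind a generic reactive transport code can
# host alongside flow and transport. All pool names, rate constants and the
# temperature factor are illustrative assumptions.

def temperature_factor(t_celsius, q10=2.0, t_ref=20.0):
    """Q10-type rate modifier (assumed functional form)."""
    return q10 ** ((t_celsius - t_ref) / 10.0)

def step_pools(pools, dt, t_celsius):
    """Advance labile/stable pools one explicit time step.

    A fraction of the degraded carbon is routed to dissolved organic
    matter (DOM, leachable); the remainder is mineralized to CO2.
    """
    k = {"labile": 0.05, "stable": 0.001}            # 1/day, assumed
    dom_fraction = {"labile": 0.2, "stable": 0.05}   # assumed partitioning
    f_t = temperature_factor(t_celsius)
    co2, dom = 0.0, 0.0
    new_pools = dict(pools)
    for name, stock in pools.items():
        decayed = k[name] * f_t * stock * dt
        new_pools[name] = stock - decayed
        dom += dom_fraction[name] * decayed
        co2 += (1.0 - dom_fraction[name]) * decayed
    return new_pools, co2, dom

pools = {"labile": 1.0, "stable": 10.0}  # kg C per m^3, assumed initial stocks
for day in range(365):
    pools, co2_flux, dom_flux = step_pools(pools, dt=1.0, t_celsius=15.0)
```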

  1. Analysis of Aeroheating Augmentation due to Reaction Control System Jets on Orion Crew Exploration Vehicle

    NASA Technical Reports Server (NTRS)

    Dyakonov, Artem A.; Buck, Gregory M.; Decaro, Anthony D.

    2009-01-01

    The analysis of the effects of reaction control system jet plumes on aftbody heating of the Orion entry capsule is presented. The analysis covered the hypersonic continuum part of the entry trajectory. Aerothermal environments at flight conditions were evaluated using the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) code and the Data Parallel Line Relaxation (DPLR) code. Results show a marked augmentation of aftbody heating due to the roll, yaw and aft pitch thrusters. No significant augmentation is expected due to the forward pitch thrusters. Of the conditions surveyed, the maximum heat rate on the aftshell is expected when firing a pair of roll thrusters at a maximum-deceleration condition.

  2. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error-correcting codes called partial-unit-memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well-developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementation complexity over current coding systems.

  3. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
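
    The iterated-reaction-graph loop described above can be sketched in a few lines: a soup of molecule counts, a reaction base whose probabilities stand in for rate kinetics, products fed back into the soup, and arcs recorded between reactant and product sets. The molecule names, stoichiometries and probabilities below are placeholders, not the paper's Maillard reaction base.

```python
# Sketch of an iterated-reaction-graph loop: a "soup" of molecules, a base of
# candidate reactions applied with probabilities standing in for rate
# kinetics, products fed back to the soup, and a graph recording molecules as
# nodes and reactions as arcs. Species and probabilities are illustrative.
from collections import Counter
import random

reaction_base = [
    # (reactants, products, probability) -- illustrative placeholders
    (Counter({"glucose": 1, "glycine": 1}), Counter({"amadori": 1}), 0.30),
    (Counter({"amadori": 1}), Counter({"deoxyosone": 1, "glycine": 1}), 0.20),
    (Counter({"deoxyosone": 1}), Counter({"furfural": 1}), 0.10),
]

def iterate_reaction_graph(initial_soup, steps=1000, seed=0):
    rng = random.Random(seed)
    soup = Counter(initial_soup)
    arcs = []  # molecules are nodes; each applied reaction adds an arc
    for _ in range(steps):
        reactants, products, prob = rng.choice(reaction_base)
        feasible = all(soup[m] >= n for m, n in reactants.items())
        if feasible and rng.random() < prob:  # probability stands in for rate kinetics
            soup -= reactants                 # take reactants from the soup
            soup += products                  # feed products back to the soup
            arcs.append((tuple(reactants), tuple(products)))
    return soup, arcs

final_soup, reaction_arcs = iterate_reaction_graph({"glucose": 50, "glycine": 50})
```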

  4. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to the target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  5. Light-ion Production from O, Si, Fe and Bi Induced by 175 MeV Quasi-monoenergetic Neutrons

    NASA Astrophysics Data System (ADS)

    Bevilacqua, R.; Pomp, S.; Jansson, K.; Gustavsson, C.; Österlund, M.; Simutkin, V.; Hayashi, M.; Hirayama, S.; Naitou, Y.; Watanabe, Y.; Hjalmarsson, A.; Prokofiev, A.; Tippawan, U.; Lecolley, F.-R.; Marie, N.; Leray, S.; David, J.-C.; Mashnik, S.

    2014-05-01

    We have measured double-differential cross sections for the interaction of 175 MeV quasi-monoenergetic neutrons with O, Si, Fe and Bi. We have compared these results with model calculations using INCL4.5-Abla07, MCNP6 and TALYS-1.2. We have also compared our data with PHITS calculations, in which the pre-equilibrium stage of the reaction was described using, respectively, the JENDL/HE-2007 evaluated data library, the quantum molecular dynamics model (QMD), and a modified version of QMD (MQMD) that includes a surface coalescence model. The most crucial aspect is the formation and emission of composite particles in the pre-equilibrium stage.

  6. Interface requirements to couple thermal hydraulics codes to severe accident codes: ICARE/CATHARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camous, F.; Jacq, F.; Chatelard, P.

    1997-07-01

    In order to describe the whole sequence of severe LWR accidents, up to vessel failure, with the same code, the Institute of Protection and Nuclear Safety has coupled the severe accident code ICARE2 to the thermal-hydraulics code CATHARE2. The resulting code, ICARE/CATHARE, is designed to be as pertinent as possible in all phases of the accident. This paper is mainly devoted to the description of the ICARE2-CATHARE2 coupling.

  7. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  8. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... for Residential Construction in High Wind Regions. ICC 700: National Green Building Standard The..., coordinated, and necessary to regulate the built environment. Federal agencies frequently use these codes and... International Codes and Standards consist of the following: ICC Codes International Building Code. International...

  9. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... for Residential Construction in High Wind Areas. ICC 700: National Green Building Standard. The... Codes and Standards that are comprehensive, coordinated, and necessary to regulate the built environment... International Codes and Standards consist of the following: ICC Codes International Building Code. International...

  10. Impacts of Model Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have statewide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  11. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2017-12-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system v A^T M_B(s a) = s. We propose a quantum Gauss-Jordan elimination procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinal number of the unauthorized set, √(2^{|B|}).

  12. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2018-03-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system v A^T M_B(s a) = s. We propose a quantum Gauss-Jordan elimination procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinal number of the unauthorized set, √(2^{|B|}).

  13. Microdosimetric evaluation of the neutron field for BNCT at Kyoto University reactor by using the PHITS code.

    PubMed

    Baba, H; Onizuka, Y; Nakao, M; Fukahori, M; Sato, T; Sakurai, Y; Tanaka, H; Endo, S

    2011-02-01

    In this study, microdosimetric energy distributions of secondary charged particles from the (10)B(n,α)(7)Li reaction in a boron neutron capture therapy (BNCT) field were calculated using the Particle and Heavy Ion Transport code System (PHITS). The PHITS simulation was performed to reproduce the geometrical set-up of an experiment that measured the microdosimetric energy distributions at the Kyoto University Reactor, where two types of tissue-equivalent proportional counters were used, one with an A-150 wall alone and another with a 50-ppm-boron-loaded A-150 wall. Based on comparisons with the experimental results, the PHITS code was found to be a useful tool for simulating the energy deposited in tissue in BNCT.

  14. Towers of generalized divisible quantum codes

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.

  15. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    NASA Astrophysics Data System (ADS)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows the direction of an X-ray beam to be deviated, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the peak signal-to-noise ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR lower than that of the phase coded aperture reconstructions.
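
    A generic forward model helps to see what a block-unblock coded aperture does: the binary mask multiplies the field just after the sample and the detector records only the squared magnitude of its Fourier transform. The sketch below shows that measurement model under assumed sizes and a random 0/1 mask; it is not the paper's detour-phase construction or the SAXS/WAXS simulation pipeline.

```python
# Generic coded-diffraction forward model (a sketch, not the paper's method):
# a binary block-unblock aperture modulates the field just after the sample,
# and the detector records intensities only, i.e. |F(aperture * field)|^2.
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Placeholder complex field standing in for the exit wave of a sample.
sample = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def coded_diffraction_pattern(field, aperture):
    """Intensity-only measurement of the coded field."""
    return np.abs(np.fft.fft2(aperture * field)) ** 2

# A block-unblock (0/1) coded aperture approximating a phase mask.
aperture = rng.integers(0, 2, size=(n, n)).astype(float)
measurement = coded_diffraction_pattern(sample, aperture)
```

    In practice, phase retrieval algorithms typically combine several such patterns taken with different masks; the single measurement above only illustrates the forward operator.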

  16. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1984-01-01

    Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.

  17. Expanding the genetic code for site-specific labelling of tobacco mosaic virus coat protein and building biotin-functionalized virus-like particles.

    PubMed

    Wu, F C; Zhang, H; Zhou, Q; Wu, M; Ballard, Z; Tian, Y; Wang, J Y; Niu, Z W; Huang, Y

    2014-04-18

    A method for site-specific and high yield modification of tobacco mosaic virus coat protein (TMVCP) utilizing a genetic code expanding technology and copper free cycloaddition reaction has been established, and biotin-functionalized virus-like particles were built by the self-assembly of the protein monomers.

  18. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays.

  19. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    PubMed

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
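
    The two validity measures used in the study, sensitivity and positive predictive value, follow directly from a cross-tabulation of coded data against chart review; the sketch below uses invented counts purely for illustration, not the study's data.

```python
# Sensitivity and positive predictive value of a coded condition against
# chart review (the reference standard). Counts are invented placeholders.
def coding_validity(true_positive, false_positive, false_negative):
    sensitivity = true_positive / (true_positive + false_negative)
    ppv = true_positive / (true_positive + false_positive)
    return sensitivity, ppv

# Example: a condition coded in discharge abstracts vs. found on chart review.
sens, ppv = coding_validity(true_positive=320, false_positive=40, false_negative=180)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")
```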

  20. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are done from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.

  1. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    The program aims at developing mathematical models, and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest: a SiCl4/Na reactor in which Si(l) is collected on the flow-tube reactor walls, and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.

  2. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  3. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  4. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  5. [Visits of patients with exertional rhabdomyolysis to the Emergency Department at Landspítali, The National University Hospital of Iceland in the years 2008-2012].

    PubMed

    Halldorsson, Arnljotur Bjorn; Benedikz, Elisabet; Olafsson, Isleifur; Mogensen, Brynjolfur

    2016-03-01

    Overexertion and excessive training are among the multiple etiologies of rhabdomyolysis. Creatine kinase (CK) and myoglobin, released from skeletal muscle cells, are useful for diagnosis and follow-up. Acute kidney injury is a serious complication of myoglobinemia. Literature on exertional rhabdomyolysis in the general population is scarce. The aim of this study was to investigate the epidemiology of exertional rhabdomyolysis among patients diagnosed at Landspítali The National University Hospital of Iceland in 2008-2012. The study was retrospective and observational. All patients presenting with muscle pain after exertion and elevated creatine kinase >1000 IU/L during the period from 1 January 2008 to 31 December 2012 were included. Patients with CK elevations secondary to causes other than exertion were excluded. Variables included patient number and gender, CK levels, date of hospital admission, cause of rhabdomyolysis, location of injured muscle groups, length of hospital stay, complications, and means of fluid replacement. Population figures for the capital region were gathered from Statistics Iceland, and information on sport practice in the capital region from The National Olympic and Sports Association of Iceland. Exertional rhabdomyolysis was diagnosed in 54 patients, 18 females (33.3%) and 36 males (66.7%), or 8.3% of rhabdomyolysis cases from all causes in the study period (648 cases). The incidence in the capital region was 5.0/100,000 inhabitants per year over the study period. Median age was 28 years and median CK level was 24,132 IU/L. CK levels were higher among females, but the difference between genders was not significant. Muscle groups of the upper and lower extremities were most frequently affected (89%). Thirty patients received intravenous fluids; they had significantly higher CK values than other patients. One patient developed acute kidney injury. Information on sport practice and physical training in the capital region was not available

  6. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    The software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.

  7. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
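
    The random-sampling idea behind this approach can be illustrated with a toy propagation: correlated Gaussian perturbations of a few multi-group cross sections, each sample pushed through a simple infinite-medium multiplication factor. The group structure, nominal values, covariance matrix and the k-infinity expression below are assumptions for illustration only; they are not the XSUSA/HELIOS2 interface or the SCALE 6.1 covariance data.

```python
# Sketch of covariance-driven random sampling in the spirit of XSUSA (not the
# actual XSUSA/HELIOS2 implementation): perturb a few multi-group cross
# sections with correlated Gaussian samples and propagate each sample through
# a toy infinite-medium multiplication factor.
import numpy as np

rng = np.random.default_rng(42)

# Nominal 2-group data (illustrative numbers only, 1/cm).
nu_sigma_f = np.array([0.005, 0.10])   # nu * fission cross section
sigma_a    = np.array([0.010, 0.12])   # absorption cross section
nominal = np.concatenate([nu_sigma_f, sigma_a])

# Assumed relative covariance matrix: 5% uncertainties, mild correlations.
rel_cov = 0.05**2 * np.array([
    [1.0, 0.3, 0.0, 0.0],
    [0.3, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.2],
    [0.0, 0.0, 0.2, 1.0],
])
cov = rel_cov * np.outer(nominal, nominal)   # absolute covariance

flux = np.array([0.4, 0.6])  # fixed 2-group spectrum weights (assumed)

def k_inf(params):
    """Toy infinite-medium multiplication factor from perturbed data."""
    nsf, sa = params[:2], params[2:]
    return float(flux @ nsf / (flux @ sa))

samples = rng.multivariate_normal(nominal, cov, size=500)
k_values = np.array([k_inf(s) for s in samples])
print(f"k_inf = {k_values.mean():.4f} +/- {k_values.std():.4f}")
```

    The spread of the sampled results is the estimate of the nuclear-data-induced uncertainty; in the real workflow, each sample would drive a full lattice depletion calculation rather than a one-line formula.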

  8. PHITS-2.76, Particle and Heavy Ion Transport code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-08-01

    Version 03 PHITS can deal with the transport of almost all particles (nucleons, nuclei, mesons, photons, and electrons) over wide energy ranges, using several nuclear reaction models and nuclear data libraries. The geometrical configuration of the simulation can be set with GG (General Geometry) or CG (Combinatorial Geometry). Various quantities such as heat deposition, track length and production yields can be deduced from the simulation, using implemented estimator functions called "tallies". The code also has a function to draw 2D and 3D figures of the calculated results as well as the setup geometries, using the code ANGEL. The physical processes included in PHITS can be divided into two categories, transport processes and collision processes. In the transport process, PHITS can simulate the motion of particles under external fields such as magnetic and gravitational fields. Without external fields, neutral particles move along a straight trajectory with constant energy up to the next collision point. Charged particles, however, interact many times with electrons in the material, losing energy and changing direction. PHITS treats ionization processes not as collisions but as a transport process, using the continuous-slowing-down approximation. The average stopping power is given by the charge density of the material and the momentum of the particle, taking into account the fluctuations of the energy loss and the angular deviation. In the collision process, PHITS can simulate elastic and inelastic interactions as well as the decay of particles. The total reaction cross section, or the lifetime of the particle, is an essential quantity in the determination of the mean free path of the transported particle. According to the mean free path, PHITS chooses the next collision point using the Monte Carlo method. To generate the secondary particles of the collision, we need the information of the final states of the collision. For neutron-induced reactions in the low energy region, PHITS employs
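
    The collision-point selection mentioned above is the standard Monte Carlo free-path sampling step: in a homogeneous material the distance to the next collision is exponentially distributed, with mean equal to the inverse of the macroscopic total cross section. The sketch below shows that sampling rule with an arbitrary cross-section value; it is not PHITS source code.

```python
# How a Monte Carlo transport code picks the next collision point (a generic
# sketch, not PHITS source): in a homogeneous material the free path follows
# an exponential distribution with mean 1 / Sigma_t, where Sigma_t is the
# macroscopic total cross section.
import numpy as np

rng = np.random.default_rng(1)

def sample_path_length(sigma_total_per_cm, size=1):
    """Distance to the next collision, s = -ln(xi) / Sigma_t."""
    xi = rng.random(size)
    return -np.log(xi) / sigma_total_per_cm

sigma_t = 0.5                       # 1/cm, illustrative value
paths = sample_path_length(sigma_t, size=100_000)
print(paths.mean(), 1.0 / sigma_t)  # sample mean approaches the mean free path
```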

  9. Clustering mechanism of oxocarboxylic acids involving hydration reaction: Implications for the atmospheric models

    NASA Astrophysics Data System (ADS)

    Liu, Ling; Kupiainen-Määttä, Oona; Zhang, Haijie; Li, Hao; Zhong, Jie; Kurtén, Theo; Vehkamäki, Hanna; Zhang, Shaowen; Zhang, Yunhong; Ge, Maofa; Zhang, Xiuhui; Li, Zesheng

    2018-06-01

    The formation of atmospheric aerosol particles from condensable gases is a dominant source of particulate matter in the boundary layer, but the mechanism is still ambiguous. During the clustering process, precursors with different reactivities can induce various chemical reactions in addition to the formation of hydrogen bonds. However, clustering mechanisms involving chemical reactions are rarely considered in most nucleation process models. Oxocarboxylic acids are common constituents of secondary organic aerosol, but the role of oxocarboxylic acids in secondary organic aerosol formation is still not fully understood. In this paper, glyoxylic acid, the simplest and most abundant atmospheric oxocarboxylic acid, has been selected as a representative example of oxocarboxylic acids in order to study the clustering mechanism involving hydration reactions, using density functional theory combined with the Atmospheric Clusters Dynamic Code. The hydration reaction of glyoxylic acid can occur either in the gas phase or during the clustering process. Under atmospheric conditions, the total conversion ratio of glyoxylic acid to its hydration product (2,2-dihydroxyacetic acid) in both the gas phase and clusters can be up to 85%, and the product can further participate in the clustering process. The differences in cluster structures and properties induced by the hydration reaction lead to significant differences in cluster formation rates and pathways at relatively low temperatures.

  10. Role of (n,2n) reactions in transmutation of long-lived fission products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apse, V. A.; Kulikov, G. G., E-mail: ggkulikov@mephi.ru; Kulikov, E. G.

    2016-12-15

    The conditions under which (n,γ) and (n,2n) reactions can help or hinder each other in neutron transmutation of long-lived fission products (LLFPs) are considered. Isotopic and elemental transmutation for the main long-lived fission products, {sup 79}Se, {sup 93}Zr, {sup 99}Tc, {sup 107}Pd, {sup 126}Sn, {sup 129}I, and {sup 135}Cs, is considered. The effect of (n,2n) reactions on the equilibrium amount of nuclei of the transmuted isotope and the neutron consumption required for the isotope processing is estimated. The aim of the study is to estimate the influence of (n,2n) reactions on the efficiency of neutron LLFP transmutation. The code TIME26 and the libraries of evaluated nuclear data ABBN-93, JEF-PC, and the JANIS system are applied. The following results are obtained: (1) The effect of (n,2n) reactions on the minimum number of neutrons required for transmutation and the equilibrium amount of LLFP nuclei is estimated. (2) It is demonstrated that, for three LLFP isotopes ({sup 126}Sn, {sup 129}I, and {sup 135}Cs), (n,γ) and (n,2n) reactions are partners facilitating neutron transmutation. The strongest effect of the (n,2n) reaction is found for {sup 126}Sn transmutation (reduction of the neutron consumption by 49% and of the equilibrium amount of nuclei by 19%).

  11. Superluminal Labview Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheat, Robert; Marksteiner, Quinn; Quenzer, Jonathan

    2012-03-26

    This LabVIEW code is used to set the phases and amplitudes on the 72 antennas of the superluminal machine, and to map out the radiation pattern from the superluminal antenna. Each antenna radiates a modulated signal consisting of two separate frequencies in the range of 2 GHz to 2.8 GHz. The phases and amplitudes from each antenna are controlled by a pair of AD8349 vector modulators (VMs). These VMs set the phase and amplitude of a high-frequency signal using a set of four DC inputs, which are controlled by Linear Technologies LTC1990 digital-to-analog converters (DACs). The LabVIEW code controls these DACs through an 8051 microcontroller. The code also monitors the phases and amplitudes of the 72 channels. Near each antenna there is a coupler that channels a portion of the power into a binary network. Through a LabVIEW-controlled switching array, any of the 72 coupled signals can be channeled into the Tektronix TDS 7404 digital oscilloscope. The LabVIEW code then takes an FFT of the signal and compares it to the FFT of a reference signal in the oscilloscope to determine the magnitude and phase of each sideband of the signal. The code compensates for phase and amplitude errors introduced by differences in cable lengths. The LabVIEW code sets each of the 72 elements to a user-determined phase and amplitude. For each element, the code runs an iterative procedure, adjusting the DACs until the correct phases and amplitudes have been reached.

  12. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.

  13. Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burk, K.W.; Andrews, G.L.

    1989-02-01

    The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The station is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in databases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those databases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the database custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.

  14. Energy Codes at a Glance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pamala C.; Richman, Eric E.

    2008-09-01

    Feeling dim from energy code confusion? Read on to give your inspections a charge. The U.S. Department of Energy’s Building Energy Codes Program addresses hundreds of inquiries from the energy codes community every year. This article offers clarification for topics of confusion submitted to BECP Technical Support of interest to electrical inspectors, focusing on the residential and commercial energy code requirements based on the most recently published 2006 International Energy Conservation Code® and ANSI/ASHRAE/IESNA1 Standard 90.1-2004.

  15. Bring out your codes! Bring out your codes! (Increasing Software Visibility and Re-use)

    NASA Astrophysics Data System (ADS)

    Allen, A.; Berriman, B.; Brunner, R.; Burger, D.; DuPrie, K.; Hanisch, R. J.; Mann, R.; Mink, J.; Sandin, C.; Shortridge, K.; Teuben, P.

    2013-10-01

    Progress is being made in code discoverability and preservation, but as discussed at ADASS XXI, many codes still remain hidden from public view. With the Astrophysics Source Code Library (ASCL) now indexed by the SAO/NASA Astrophysics Data System (ADS), the introduction of a new journal, Astronomy & Computing, focused on astrophysics software, and the increasing success of education efforts such as Software Carpentry and SciCoder, the community has the opportunity to set a higher standard for its science by encouraging the release of software for examination and possible reuse. We assembled representatives of the community to present issues inhibiting code release and sought suggestions for tackling these factors. The session began with brief statements by panelists; the floor was then opened for discussion and ideas. Comments covered a diverse range of related topics and points of view, with apparent support for the propositions that algorithms should be readily available, code used to produce published scientific results should be made available, and there should be discovery mechanisms to allow these to be found easily. With increased use of resources such as GitHub (for code availability), ASCL (for code discovery), and a stated strong preference from the new journal Astronomy & Computing for code release, we expect to see additional progress over the next few years.

  16. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current data point, and 2) flexibility, balancing the reconstruction of the current data point against locality. In this paper, we address the problem from a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed approach.
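
    For orientation, a closely related locality-constrained coding step can be written in closed form: a small regularized least-squares problem in which faraway anchors are penalized and the weights sum to one. The sketch below uses the standard locality-constrained linear coding (LLC) formulation purely as an illustration of the locality/flexibility trade-off; it is not the paper's local Laplacian coding, and the regularization and bandwidth parameters are assumptions.

```python
# A locality-constrained coding step (standard LLC closed form, shown only as
# an illustration of locality-weighted coding; not the paper's LPC scheme).
import numpy as np

def locality_constrained_code(x, anchors, lam=1e-4, sigma=1.0):
    """Code sample x (d,) over anchors (M, d); faraway anchors are penalized."""
    diff = anchors - x                                  # (M, d)
    C = diff @ diff.T                                   # local covariance
    d = np.exp(np.linalg.norm(diff, axis=1) / sigma)    # locality adaptor
    w = np.linalg.solve(C + lam * np.diag(d**2), np.ones(len(anchors)))
    return w / w.sum()                                  # weights sum to 1

rng = np.random.default_rng(0)
anchors = rng.standard_normal((16, 8))   # 16 anchor points in 8 dimensions
x = rng.standard_normal(8)
code = locality_constrained_code(x, anchors)
```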

  17. Civil Code, 11 December 1987.

    PubMed

    1988-01-01

    Article 162 of this Mexican Code provides, among other things, that "Every person has the right freely, responsibly, and in an informed fashion to determine the number and spacing of his or her children." When a marriage is involved, this right is to be observed by the spouses "in agreement with each other." The civil codes of the following states contain the same provisions: 1) Baja California (Art. 159 of the Civil Code of 28 April 1972 as revised in Decree No. 167 of 31 January 1974); 2) Morelos (Art. 255 of the Civil Code of 26 September 1949 as revised in Decree No. 135 of 29 December 1981); 3) Queretaro (Art. 162 of the Civil Code of 29 December 1950 as revised in the Act of 9 January 1981); 4) San Luis Potosi (Art. 147 of the Civil Code of 24 March 1946 as revised in 13 June 1978); Sinaloa (Art. 162 of the Civil Code of 18 June 1940 as revised in Decree No. 28 of 14 October 1975); 5) Tamaulipas (Art. 146 of the Civil Code of 21 November 1960 as revised in Decree No. 20 of 30 April 1975); 6) Veracruz-Llave (Art. 98 of the Civil Code of 1 September 1932 as revised in the Act of 30 December 1975); and 7) Zacatecas (Art. 253 of the Civil Code of 9 February 1965 as revised in Decree No. 104 of 13 August 1975). The Civil Codes of Puebla and Tlaxcala provide for this right only in the context of marriage with the spouses in agreement. See Art. 317 of the Civil Code of Puebla of 15 April 1985 and Article 52 of the Civil Code of Tlaxcala of 31 August 1976 as revised in Decree No. 23 of 2 April 1984. The Family Code of Hidalgo requires as a formality of marriage a certification that the spouses are aware of methods of controlling fertility, responsible parenthood, and family planning. In addition, Article 22 the Civil Code of the Federal District provides that the legal capacity of natural persons is acquired at birth and lost at death; however, from the moment of conception the individual comes under the protection of the law, which is valid with respect to the

  18. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are considered in the analysis of code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D extended enhanced double weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.

  19. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    PubMed

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder.

  20. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule for accelerating convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977

  1. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule for accelerating convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems.
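
    For context, the scheduling problem itself can be stated with a very small baseline: a topological-order list scheduler that places each task on the node giving the earliest finish time, with communication costs ignored. This is only a reference sketch of the problem the abstract addresses, not the TMSCRO metaheuristic; the task names and execution costs are invented.

```python
# Baseline DAG list scheduler (earliest finish time per node, topological
# order, communication costs ignored). A reference sketch of the scheduling
# problem, not the TMSCRO method described in the abstract.
from collections import defaultdict

def list_schedule(tasks, deps, cost, n_nodes):
    """tasks: task ids; deps: {task: [predecessors]};
    cost: {(task, node): execution time}; returns (assignment, makespan)."""
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succ = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]   # Kahn's algorithm frontier
    node_free = [0.0] * n_nodes
    finish, assignment = {}, {}
    while ready:
        t = ready.pop(0)
        est = max((finish[p] for p in deps.get(t, [])), default=0.0)
        # Pick the node on which this task finishes earliest.
        best = min(range(n_nodes),
                   key=lambda n: max(est, node_free[n]) + cost[(t, n)])
        start = max(est, node_free[best])
        finish[t] = start + cost[(t, best)]
        node_free[best] = finish[t]
        assignment[t] = best
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return assignment, max(finish.values())

tasks = ["a", "b", "c", "d"]
deps = {"c": ["a", "b"], "d": ["c"]}          # predecessors of each task
cost = {(t, n): 1.0 + i + n for i, t in enumerate(tasks) for n in range(2)}
assignment, makespan = list_schedule(tasks, deps, cost, n_nodes=2)
print(assignment, makespan)
```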

  2. The production of radionuclides for nuclear medicine from a compact, low-energy accelerator system.

    PubMed

    Webster, William D; Parks, Geoffrey T; Titov, Dmitry; Beasley, Paul

    2014-05-01

    The field of nuclear medicine is reliant on radionuclides for medical imaging procedures and radioimmunotherapy (RIT). However, the recent shut-downs of key radionuclide producers have highlighted the fragility of the current radionuclide supply network. To ensure that nuclear medicine can continue to grow, adding new diagnostic and therapy options to healthcare, novel and reliable production methods are required. Siemens is developing a low-energy, high-current accelerator (up to 10 MeV and 1 mA, respectively). The capability of this low-cost, compact system for radionuclide production, for use in nuclear medicine procedures, has been considered. The production of three medically important radionuclides - (89)Zr, (64)Cu, and (103)Pd - has been evaluated via the (89)Y(p,n), (64)Ni(p,n) and (103)Rh(p,n) reactions, respectively. Theoretical cross-sections were generated using TALYS and compared to experimental data available from EXFOR. Stopping power values generated by SRIM have been used, with the TALYS-generated excitation functions, to calculate potential yields and isotopic purity in different irradiation regimes. The TALYS excitation functions were found to be in good agreement with the experimental data available from the EXFOR database. It was found that both (89)Zr and (64)Cu could be produced with high isotopic purity (over 99%), with activity yields suitable for medical diagnostics and therapy, at a proton energy of 10 MeV. At 10 MeV, the irradiation of (103)Rh produced appreciable quantities of (102)Pd, reducing the isotopic purity. A reduction in beam energy to 9.5 MeV increased the radioisotopic purity to 99% with only a small reduction in activity yield. This work demonstrates that the low-energy, compact accelerator system under development by Siemens would be capable of providing sufficient quantities of (89)Zr, (64)Cu, and (103)Pd for use in medical diagnostics and therapy. It is suggested that the system could be used to produce many other
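
    The yield calculation described (TALYS excitation functions combined with SRIM stopping powers) amounts to integrating sigma(E)/(dE/dx) over the energy lost in the target. A minimal sketch is given below; the cross-section curve, stopping power, target properties and half-life handling are hypothetical placeholders rather than TALYS or SRIM output.

# Thick-target yield estimate from an excitation function and stopping power:
# integrate sigma(E)/(dE/dx) from the exit energy up to the beam energy.
# All numerical inputs below are hypothetical placeholders, not TALYS/SRIM data.
import numpy as np

N_A = 6.022e23           # atoms/mol
Q_E = 1.602e-19          # C

def thick_target_rate(E_in, E_out, sigma_mb, dEdx_MeV_cm, current_A,
                      density_g_cm3, molar_mass, n_steps=500):
    """Production rate (atoms/s) for a beam degraded from E_in to E_out (MeV)."""
    n_target = density_g_cm3 * N_A / molar_mass        # target atoms per cm^3
    protons_per_s = current_A / Q_E
    E = np.linspace(E_out, E_in, n_steps)
    integrand = sigma_mb(E) * 1e-27 / dEdx_MeV_cm(E)   # cm^2 * cm/MeV
    dE = np.diff(E)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dE)
    return protons_per_s * n_target * integral

# hypothetical (p,n) excitation function (mb) and proton stopping power (MeV/cm)
sigma_mb = lambda E: 500.0 * np.exp(-((E - 8.0) / 3.0) ** 2)
dEdx     = lambda E: 400.0 / np.sqrt(E)

rate = thick_target_rate(E_in=10.0, E_out=4.0, sigma_mb=sigma_mb,
                         dEdx_MeV_cm=dEdx, current_A=1e-3,
                         density_g_cm3=8.9, molar_mass=64.0)
lam = np.log(2) / (12.7 * 3600.0)                      # 64Cu decay constant, 1/s
t_irr = 2 * 3600.0                                     # 2 h irradiation
activity_Bq = rate * (1.0 - np.exp(-lam * t_irr))      # saturation factor
print(f"end-of-bombardment activity ~ {activity_Bq / 3.7e10:.1f} Ci")

    In practice the analytic stand-ins would be replaced by tabulated TALYS cross sections and SRIM stopping powers.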

  3. Code Switching and Code Superimposition in Music. Working Papers in Sociolinguistics, No. 63.

    ERIC Educational Resources Information Center

    Slobin, Mark

    This paper illustrates how the sociolinguistic concept of code switching applies to the use of different styles of music. The two bases for the analogy are Labov's definition of code-switching as "moving from one consistent set of co-occurring rules to another," and the finding of sociolinguistics that code switching tends to be part of…

  4. Ethics and the Early Childhood Educator: Using the NAEYC Code. 2005 Code Edition

    ERIC Educational Resources Information Center

    Freeman, Nancy; Feeney, Stephanie

    2005-01-01

    With updated language and references to the 2005 revision of the Code of Ethical Conduct, this book, like the NAEYC Code of Ethical Conduct, seeks to inform, not prescribe, answers to tough questions that teachers face as they work with children, families, and colleagues. To help everyone become well acquainted with the Code and use it in one's…

  5. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Furthermore, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. Copyright © 2014 Elsevier
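
    Of the properties referred to above, comma-freeness is the simplest to test directly: for trinucleotides, a code is comma-free when no code word appears at a shifted position (offset 1 or 2) inside the concatenation of any two code words. The sketch below checks this property; the example sets are hypothetical and are not the circular code X of the paper.

# Comma-freeness test for a set of trinucleotides: no code word may occur at
# offset 1 or 2 inside the concatenation of any two code words.
# The example sets are hypothetical, not the code X discussed in the paper.
from itertools import product

def is_comma_free(code):
    code = set(code)
    for u, v in product(code, repeat=2):
        w = u + v                       # a 6-letter concatenation
        if w[1:4] in code or w[2:5] in code:
            return False                # a code word appears in a shifted frame
    return True

print(is_comma_free({"AAC", "ATC", "GTA"}))   # True for this hypothetical set
print(is_comma_free({"AAA"}))                 # False: AAA reappears shifted in AAAAAA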

  6. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address is assigned to each user; coupled with the low probability of predicting the source of a qubit transmitted in the channel, this offers a strong security mechanism against channel attacks on the OCDMA-based QKD network. Flexibility in design, as well as the ease of modifying the number of users, are further advantages of the code in contrast to the Optical Orthogonal Code (OOC) previously implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  7. Light elements burning reaction rates at stellar temperatures as deduced by the Trojan Horse measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamia, L.; Spitaleri, C.; La Cognata, M.

    2015-02-24

    Experimental nuclear astrophysics aims at determining the reaction rates for astrophysically relevant reactions at their Gamow energies. For charged-particle induced reactions, access to these energies is usually hindered, in direct measurements, by the presence of the Coulomb barrier between the interacting particles or by electron screening effects, which make the determination of the bare-nucleus S(E)-factor of interest for astrophysical codes difficult. The Trojan Horse Method (THM) appears as one of the most suitable tools for investigating nuclear processes of interest for astrophysics. Here, in view of the recent TH measurements, the main destruction channels for deuterium ({sup 2}H), for the two lithium isotopes {sup 6,7}Li, for {sup 9}Be and for the two boron isotopes {sup 10,11}B will be discussed.

  8. Top ten reasons to register your code with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Berriman, G. Bruce; Mink, Jessica D.; Nemiroff, Robert J.; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Teuben, Peter J.; Wallin, John F.; Warmels, Rein

    2017-01-01

    With 1,400 codes, the Astrophysics Source Code Library (ASCL, ascl.net) is the largest indexed resource for codes used in astronomy research in existence. This free online registry was established in 1999, is indexed by Web of Science and ADS, and is citable, with citations to its entries tracked by ADS. Registering your code with the ASCL is easy with our online submissions system. Making your software available for examination shows confidence in your research and makes your research more transparent, reproducible, and falsifiable. ASCL registration allows your software to be cited on its own merits and provides a citation that is trackable and accepted by all astronomy journals and journals such as Science and Nature. Registration also allows others to find your code more easily. This presentation covers the benefits of registering astronomy research software with the ASCL.

  9. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  10. Genetic Code Analysis Toolkit: A novel tool to explore the coding properties of the genetic code and DNA sequences

    NASA Astrophysics Data System (ADS)

    Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.

    2018-01-01

    The genetic code is degenerate and it is assumed that redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents a Genetic Code Analysis Toolkit (GCAT) which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and others. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/

  11. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  12. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
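
    For test problems with exact solutions such as these, a standard quantitative check is the observed order of convergence computed from errors at two mesh resolutions. The sketch below shows that calculation; the error values and refinement ratio are hypothetical, not results from the ARES study.

# Observed order of convergence, p = log(e_coarse / e_fine) / log(r), from
# errors measured against an exact solution at two mesh resolutions.
# The error values below are hypothetical, not ARES results.
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

e_coarse, e_fine = 4.1e-3, 2.2e-3      # e.g. L1 errors on 100 and 200 cells
print(f"observed order of accuracy: {observed_order(e_coarse, e_fine):.2f}")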

  13. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
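
    The central point, that chrominance can be coded far more coarsely than luminance, can be illustrated in a few lines: convert RGB to a luminance/chrominance representation and quantize the two chrominance planes to about one bit per element. The ITU-R BT.601 YCbCr conversion used below is an assumption for illustration; the paper's own colour representation and transform-coding stages are not reproduced.

# Coarse chrominance coding illustration: keep luminance at 8 bits/element and
# quantize the chrominance planes to ~1 bit/element.  Uses the BT.601 YCbCr
# conversion as a stand-in for the paper's colour representation.
import numpy as np

def rgb_to_ycbcr(rgb):                  # rgb in [0, 1], shape (H, W, 3)
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 0.5               # centre the chrominance planes
    return ycbcr

def quantize(plane, bits):
    levels = 2 ** bits
    return np.round(plane * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))           # stand-in for a colour image
ycc = rgb_to_ycbcr(rgb)
coded = np.dstack([quantize(ycc[..., 0], 8),   # luminance kept fine
                   quantize(ycc[..., 1], 1),   # chrominance ~1 bit/element
                   quantize(ycc[..., 2], 1)])
print("chrominance MSE:", float(np.mean((ycc[..., 1:] - coded[..., 1:]) ** 2)))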

  14. Codes and morals: is there a missing link? (The Nuremberg Code revisited).

    PubMed

    Hick, C

    1998-01-01

    Codes are a well known and popular but weak form of ethical regulation in medical practice. There is, however, a lack of research on the relations between moral judgments and ethical Codes, or on the possibility of morally justifying these Codes. Our analysis begins by showing, given the Nuremberg Code, how a typical reference to natural law has historically served as moral justification. We then indicate, following the analyses of H. T. Engelhardt, Jr., and A. MacIntyre, why such general moral justifications of codes must necessarily fail in a society of "moral strangers." Going beyond Engelhardt we argue, that after the genealogical suspicion in morals raised by Nietzsche, not even Engelhardt's "principle of permission" can be rationally justified in a strong sense--a problem of transcendental argumentation in morals already realized by I. Kant. Therefore, we propose to abandon the project of providing general justifications for moral judgements and to replace it with a hermeneutical analysis of ethical meanings in real-world situations, starting with the archetypal ethical situation, the encounter with the Other (E. Levinas).

  15. The neutral emergence of error minimized genetic codes superior to the standard genetic code.

    PubMed

    Massey, Steven E

    2016-11-07

    The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how a search of alternative codes for an error-minimized code could occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM has arisen via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if during code expansion the most similar amino acid to the parent amino acid, out of the set of unassigned amino acids, is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM could have arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. NR-code: Nonlinear reconstruction code

    NASA Astrophysics Data System (ADS)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  17. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
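
    The innermost (n, n-16) cyclic redundancy check can be reproduced in a few lines. The sketch below uses the CCITT generator polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones preset, which is commonly associated with CCSDS telemetry framing; treat the polynomial and preset as assumptions, since the record does not state them.

# Bit-wise CRC-16 in the style of the simulator's innermost (n, n-16) code.
# Generator polynomial 0x1021 and all-ones preset are assumed (CCITT style);
# the record itself does not specify them.
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Return the 16 parity bits for `data`, processed MSB first."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# transmit: append the 16 parity bits to the n-16 information bits
frame = b"telemetry transfer frame payload"
parity = crc16(frame)
print(f"CRC-16 parity word: 0x{parity:04X}")
# receive: recompute over the information bits and compare with the parity word
assert crc16(frame) == parity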

  18. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.

  19. Propagation of Reactions in Thermally-damaged PBX-9501

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tringe, J W; Glascoe, E A; Kercher, J R

    A thermally-initiated explosion in PBX-9501 (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) is observed in situ by flash x-ray imaging, and modeled with the LLNL multi-physics arbitrary-Lagrangian-Eulerian code ALE3D. The containment vessel deformation provides a useful estimate of the reaction pressure at the time of the explosion, which we calculate to be in the range 0.8-1.4 GPa. Closely-coupled ALE3D simulations of these experiments, utilizing the multi-phase convective burn model, provide detailed predictions of the reacted mass fraction and deflagration front acceleration. During the preinitiation heating phase of these experiments, the solid HMX portion of the PBX-9501 undergoes a {beta}-phase to {delta}-phase transition which damages the explosive and induces porosity. The multi-phase convective burn model results demonstrate that damaged particle size and pressure are critical for predicting reaction speed and violence. In the model, energetic parameters are taken from LLNL's thermochemical-kinetics code Cheetah and burn rate parameters from Son et al. (2000). Model predictions of an accelerating deflagration front are in qualitative agreement with the experimental images assuming a mode particle diameter in the range 300-400 {micro}m. There is uncertainty in the initial porosity caused by thermal damage of PBX-9501 and, thus, the effective surface area for burning. To better understand these structures, we employ x-ray computed tomography (XRCT) to examine the microstructure of PBX-9501 before and after thermal damage. Although lack of contrast between grains and binder prevents the determination of full grain size distribution in this material, there are many domains visible in thermally damaged PBX-9501 with diameters in the 300-400 {micro}m range.

  20. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2 exp 8) with an error correcting capability of t = 16 eight bit symbols. This code's excellent performance and the existence of fast, cost effective, decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
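
    The interleaver described can be pictured as a depth-by-codeword array: RS codewords are written in as rows and the channel stream is read out by columns, so a burst out of the inner decoder is spread across several outer codewords. The sketch below uses the (255,223) symbol length and depth 4 from the recommendation, with dummy symbol values.

# Depth-4 block symbol interleaver for (255,223) RS codewords: write codewords
# as rows, read the channel stream by columns, so a burst of channel errors is
# spread across up to 4 different RS codewords.  Symbol values are dummies.
import numpy as np

DEPTH, N = 4, 255

def interleave(codewords):              # codewords: shape (DEPTH, N)
    return codewords.T.reshape(-1)      # read out column-wise

def deinterleave(stream):
    return stream.reshape(N, DEPTH).T

codewords = np.arange(DEPTH * N, dtype=np.uint8).reshape(DEPTH, N)
stream = interleave(codewords)

stream_rx = stream.copy()
stream_rx[100:116] = 0                  # a 16-symbol burst out of the inner decoder

errors_per_codeword = (deinterleave(stream_rx) != codewords).sum(axis=1)
print("symbol errors per RS codeword:", errors_per_codeword)   # 4 each, well
# within the t = 16 symbol correction capability of each outer codeword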

  1. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, turning the traditional shutter frequency spectrum into a wider frequency band and thereby preserving more image information in the frequency domain. The method used to search for the optimal code is critical for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analysing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time spent searching for the optimal code decreases with the presented method, and the restored image shows better subjective quality and superior objective evaluation values.
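
    A common figure of merit behind such criteria is to fix the number of open-shutter chops and avoid deep nulls in the code's frequency spectrum, since nulls make deconvolution ill-conditioned. The sketch below maximises the minimum DFT magnitude of a binary code by random search; the code length, the number of ones, and the use of random search in place of the paper's genetic algorithm and noise-aware criterion are all simplifying assumptions.

# Simplified coded-exposure (flutter shutter) code search: fix the number of
# "open" chops and maximise the minimum DFT magnitude of the binary code, so
# that its spectrum has no deep nulls.  Random search stands in for the paper's
# genetic algorithm, and the noise-aware selection criterion is omitted.
import numpy as np

rng = np.random.default_rng(1)

def spectral_merit(code):
    return np.abs(np.fft.fft(code)).min()

def random_search(length=52, ones=26, iters=20000):
    template = np.array([1.0] * ones + [0.0] * (length - ones))
    best_code, best_merit = None, -1.0
    for _ in range(iters):
        code = rng.permutation(template)
        merit = spectral_merit(code)
        if merit > best_merit:
            best_code, best_merit = code, merit
    return best_code, best_merit

code, merit = random_search()
print("best minimum |DFT| found:", round(float(merit), 3))
print("code:", "".join(str(int(b)) for b in code))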

  2. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2(exp m) - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2 exp m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.

  3. The Proteus Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Bui, Trong T.; Cavicchi, Richard H.; Conley, Julianne M.; Molls, Frank B.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. The emphasis in the development of Proteus is not algorithm development or research on numerical methods, but rather the development of the code itself. The objective is to develop codes that are user-oriented, easily-modified, and well-documented. Well-proven, state-of-the-art solution algorithms are being used. Code readability, documentation (both internal and external), and validation are being emphasized. This paper is a status report on the Proteus development effort. The analysis and solution procedure are described briefly, and the various features in the code are summarized. The results from some of the validation cases that have been run are presented for both the two- and three-dimensional codes.

  4. Serial-data correlator/code translator

    NASA Technical Reports Server (NTRS)

    Morgan, L. E.

    1977-01-01

    System, consisting of sampling flip flop, memory (either RAM or ROM), and memory buffer, correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.

  5. Circular codes revisited: a statistical approach.

    PubMed

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and underwent a rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.

    PubMed

    Arend, Isabel; Aisenberg, Daniela; Henik, Avishai

    2016-10-01

    In the Simon effect (SE), choice reactions are fast if the location of the stimulus and the response correspond when stimulus location is task-irrelevant; therefore, the SE reflects the automatic processing of space. Priming of social concepts was found to affect automatic processing in the Stroop effect. We investigated whether spatial coding measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) and another involving color perception - achromatopsia (ACHM). In two experiments the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters, but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing. Copyright © 2016. Published by Elsevier Inc.

  7. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  8. Continuous Codes and Standards Improvement (CCSI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivkin, Carl H; Burgess, Robert M; Buttner, William J

    2015-10-21

    As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing code solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because as technology and the state of safety knowledge develops there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.

  9. Needs and opportunities for CFD-code validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, B.L.

    1996-06-01

    The conceptual design for the ESS target consists of a horizontal cylinder containing a liquid metal - mercury is considered in the present study - which circulates by forced convection and carries away the waste heat generated by the spallation reactions. The protons enter the target via a beam window, which must withstand the thermal, mechanical and radiation loads to which it is subjected. For a beam power of 5 MW, it is estimated that about 3.3 MW of waste heat would be deposited in the target material and associated structures. It is intended to confirm, by detailed thermal-hydraulics calculations, that a convective flow of the liquid metal target material can effectively remove the waste heat. The present series of Computational Fluid Dynamics (CFD) calculations has indicated that a single-inlet target design leads to excessive local overheating, but a multiple-inlet design is coolable. With this option, inlet flow streams, two from the sides and one from below, merge over the target window, cooling the window itself in crossflow and carrying away the heat generated volumetrically in the mercury with a strong axial flow down the exit channel. The three intersecting streams form a complex, three-dimensional, swirling flow field in which critical heat transfer processes are taking place. In order to produce trustworthy code simulations, it is necessary that the mesh resolution is adequate for the thermal-hydraulic conditions encountered and that the physical models used by the code are appropriate to the fluid dynamic environment. The former relies on considerable user experience in the application of the code, and the latter assurance is best gained in the context of controlled benchmark activities where measured data are available. Such activities will serve to quantify the accuracy of given models and to identify potential problem areas for the numerical simulation which may not be obvious from global heat and mass balance considerations.

  10. QR code for medical information uses.

    PubMed

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
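
    A minimal example of the label-generation side, using the third-party Python "qrcode" package (an assumption for illustration; the authors' own online tools are not reproduced here). The URL is hypothetical.

# Generate a QR code label that encodes a (hypothetical) medical record URL.
# Uses the third-party "qrcode" package (pip install qrcode[pil]); this is an
# illustrative stand-in, not the tools described in the record.
import qrcode

url = "https://example.org/records/12345"     # hypothetical URL
img = qrcode.make(url)                        # build the QR symbol as an image
img.save("record_label.png")
print("wrote record_label.png")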

  11. Lean coding machine. Facilities target productivity and job satisfaction with coding automation.

    PubMed

    Rollins, Genna

    2010-07-01

    Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.

  12. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles A. Wemple; Joshua J. Cogliati

    2005-04-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
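
    A core step in a neutral-particle Monte Carlo code of this kind is sampling the distance to the next collision from the exponential free-path distribution, s = -ln(xi)/Sigma_t. The sketch below performs that sampling with a hypothetical macroscopic total cross section; it is unrelated to the Java implementation described, and NumPy's generator replaces the shift-register random number sequence.

# Free-path sampling, s = -ln(xi)/Sigma_t, the basic distance-to-collision step
# of neutral-particle Monte Carlo transport in a homogeneous medium.
# Sigma_t is hypothetical; NumPy's RNG stands in for the shift-register generator.
import numpy as np

rng = np.random.default_rng(42)
sigma_t = 0.35                           # macroscopic total cross section, 1/cm

n_histories = 100_000
xi = 1.0 - rng.random(n_histories)       # uniform random numbers in (0, 1]
s = -np.log(xi) / sigma_t                # sampled free paths, cm

print(f"sampled mean free path: {s.mean():.3f} cm (analytic: {1 / sigma_t:.3f} cm)")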

  13. Recent developments in multidimensional transport methods for the APOLLO 2 lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmijarevic, I.; Sanchez, R.

    1995-12-31

    A usual method of preparation of homogenized cross sections for reactor coarse-mesh calculations is based on two-dimensional multigroup transport treatment of an assembly together with an appropriate leakage model and reaction-rate-preserving homogenization technique. The actual generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at an MOX-UO{sub 2} interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of recently developed multidimensional methods that are currently being tested.

  14. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.

  15. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  16. Abstract feature codes: The building blocks of the implicit learning system.

    PubMed

    Eberhardt, Katharina; Esser, Sarah; Haider, Hilde

    2017-07-01

    According to the Theory of Event Coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001), action and perception are represented in a shared format in the cognitive system by means of feature codes. In implicit sequence learning research, it is still common to make a conceptual difference between independent motor and perceptual sequences. This supposedly independent learning takes place in encapsulated modules (Keele, Ivry, Mayr, Hazeltine, & Heuer 2003) that process information along single dimensions. These dimensions have remained underspecified so far. It is especially not clear whether stimulus and response characteristics are processed in separate modules. Here, we suggest that feature dimensions as they are described in the TEC should be viewed as the basic content of modules of implicit learning. This means that the modules process all stimulus and response information related to certain feature dimensions of the perceptual environment. In 3 experiments, we investigated by means of a serial reaction time task the nature of the basic units of implicit learning. As a test case, we used stimulus location sequence learning. The results show that a stimulus location sequence and a response location sequence cannot be learned without interference (Experiment 2) unless one of the sequences can be coded via an alternative, nonspatial dimension (Experiment 3). These results support the notion that spatial location is one module of the implicit learning system and, consequently, that there are no separate processing units for stimulus versus response locations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Long non-coding RNA CASC2 regulates cell biological behaviour through the MAPK signalling pathway in hepatocellular carcinoma.

    PubMed

    Gan, Yuanyuan; Han, Nana; He, Xiaoqin; Yu, Jiajun; Zhang, Meixia; Zhou, Yujie; Liang, Huiling; Deng, Junjian; Zheng, Yongfa; Ge, Wei; Long, Zhixiong; Xu, Ximing

    2017-06-01

    Long non-coding RNAs have previously been demonstrated to play important roles in regulating human diseases, especially cancer. However, the biological functions and molecular mechanisms of long non-coding RNAs in hepatocellular carcinoma have not been extensively studied. The long non-coding RNA CASC2 (cancer susceptibility candidate 2) has been characterised as a tumour suppressor in endometrial cancer and gliomas. However, the role and function of CASC2 in hepatocellular carcinoma remain unknown. In this study, using quantitative real-time polymerase chain reaction, we confirmed that CASC2 expression was downregulated in 50 hepatocellular carcinoma cases (62%) and in hepatocellular carcinoma cell lines compared with the paired adjacent tissues and normal liver cells. In vitro experiments further demonstrated that overexpressed CASC2 decreased hepatocellular carcinoma cell proliferation, migration and invasion as well as promoted apoptosis via inactivating the mitogen-activated protein kinase signalling pathway. Our findings demonstrate that CASC2 could be a useful tumour suppressor factor and a promising therapeutic target for hepatocellular carcinoma.

  18. A multidisciplinary approach to vascular surgery procedure coding improves coding accuracy, work relative value unit assignment, and reimbursement.

    PubMed

    Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres

    2016-08-01

    Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. This increase, however, was due to errors by institution coders, with six inappropriately used codes

  19. A unifying kinetic framework for modeling oxidoreductase-catalyzed reactions.

    PubMed

    Chang, Ivan; Baldi, Pierre

    2013-05-15

    Oxidoreductases are a fundamental class of enzymes responsible for the catalysis of oxidation-reduction reactions, crucial in most bioenergetic metabolic pathways. From their common root in the ancient prebiotic environment, oxidoreductases have evolved into diverse and elaborate protein structures with specific kinetic properties and mechanisms adapted to their individual functional roles and environmental conditions. While accurate kinetic modeling of oxidoreductases is thus important, current models suffer from limitations to the steady-state domain, lack empirical validation or are too specialized to a single system or set of conditions. To address these limitations, we introduce a novel unifying modeling framework for kinetic descriptions of oxidoreductases. The framework is based on a set of seven elementary reactions that (i) form the basis for 69 pairs of enzyme state transitions for encoding various specific microscopic intra-enzyme reaction networks (micro-models), and (ii) lead to various specific macroscopic steady-state kinetic equations (macro-models) via thermodynamic assumptions. Thus, a synergistic bridge between the micro and macro kinetics can be achieved, enabling us to extract unitary rate constants, simulate reaction variance and validate the micro-models using steady-state empirical data. To help facilitate the application of this framework, we make available RedoxMech: a Mathematica™ software package that automates the generation and customization of micro-models. The Mathematica™ source code for RedoxMech, the documentation and the experimental datasets are all available from: http://www.igb.uci.edu/tools/sb/metabolic-modeling. pfbaldi@ics.uci.edu Supplementary data are available at Bioinformatics online.

  20. Reaction time for trimolecular reactions in compartment-based reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang

    2018-05-01

    Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.
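
    The mean reaction (first collision) time derived in the paper can be checked by direct stochastic simulation: three molecules jump between neighbouring compartments of a periodic one-dimensional lattice until they all occupy the same compartment. The lattice size and jump rate below are hypothetical, not the parameter values used in the paper.

# Monte Carlo estimate of the first collision time of three molecules on a
# periodic 1-D lattice of compartments.  Each molecule jumps to a neighbouring
# compartment at rate `jump_rate` per direction; a collision is all three
# molecules occupying the same compartment.  Parameters are hypothetical.
import random

def first_collision_time(n_comp=10, jump_rate=1.0):
    pos = [random.randrange(n_comp) for _ in range(3)]
    t = 0.0
    while not (pos[0] == pos[1] == pos[2]):
        total_rate = 3 * 2 * jump_rate           # 3 molecules x 2 directions
        t += random.expovariate(total_rate)      # Gillespie-style waiting time
        i = random.randrange(3)                  # which molecule jumps
        pos[i] = (pos[i] + random.choice((-1, 1))) % n_comp   # periodic boundary
    return t

random.seed(0)
samples = [first_collision_time() for _ in range(2000)]
print("estimated mean first collision time:", sum(samples) / len(samples))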

  1. Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields

    NASA Astrophysics Data System (ADS)

    Motazedi, Mohammad Reza; Dianat, Reza

    2017-02-01

    In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65,537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well-matched to the length of OFDM frames. Over these fields, the block lengths of codes are powers of two and we fully exploit the radix-2 fast Fourier transform algorithms. Multiplications and additions are simple modulus operations. These codes provide desirable randomness with a small perturbation in information symbols that is essential for generation of different statistically independent candidates. Our simulations show that the PAPR reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but contrary to SLM, we can get error correction capability. Also for the second proposed technique, the transmission of side information is not needed. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
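
    The quantity being reduced is the peak-to-average power ratio of the time-domain OFDM symbol, and the reference technique is selected mapping (SLM) over phase-rotated candidates. The sketch below computes the PAPR of one OFDM frame and applies a conventional SLM-style selection; the frame length, QPSK mapping and number of candidates are hypothetical, and the RS-coded candidate generation of the paper is not reproduced.

# PAPR of a time-domain OFDM symbol and a conventional SLM baseline: generate
# U phase-rotated candidates and keep the one with the lowest PAPR.  Frame
# length, QPSK mapping and U are hypothetical; the RS-coded candidate
# generation proposed in the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(7)
N = 256                                   # number of subcarriers

def papr_db(freq_symbols):
    x = np.fft.ifft(freq_symbols)         # time-domain OFDM symbol
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

bits = rng.integers(0, 2, size=(2, N)) * 2 - 1        # +/-1 per rail
qpsk = (bits[0] + 1j * bits[1]) / np.sqrt(2)          # QPSK subcarrier symbols
print(f"original PAPR: {papr_db(qpsk):.2f} dB")

U = 16                                    # statistically independent candidates
candidates = qpsk * np.exp(2j * np.pi * rng.random((U, N)))
print(f"best of {U} SLM candidates: {min(papr_db(c) for c in candidates):.2f} dB")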

  2. An integrated open framework for thermodynamics of reactions that combines accuracy and coverage.

    PubMed

    Noor, Elad; Bar-Even, Arren; Flamholz, Avi; Lubling, Yaniv; Davidi, Dan; Milo, Ron

    2012-08-01

    The laws of thermodynamics describe a direct, quantitative relationship between metabolite concentrations and reaction directionality. Despite great efforts, thermodynamic data suffer from limited coverage, scattered accessibility and non-standard annotations. We present a framework for unifying thermodynamic data from multiple sources and demonstrate two new techniques for extrapolating the Gibbs energies of unmeasured reactions and conditions. Both methods account for changes in cellular conditions (pH, ionic strength, etc.) by using linear regression over the ΔG(○) of pseudoisomers and reactions. The Pseudoisomeric Reactant Contribution method systematically infers compound formation energies using measured K' and pK(a) data. The Pseudoisomeric Group Contribution method extends the group contribution method and achieves a high coverage of unmeasured reactions. We define a continuous index that predicts the reversibility of a reaction under a given physiological concentration range. In the characteristic physiological range 3μM-3mM, we find that roughly half of the reactions in Escherichia coli's metabolism are reversible. These new tools can increase the accuracy of thermodynamic-based models, especially in non-standard pH and ionic strengths. The reversibility index can help modelers decide which reactions are reversible in physiological conditions. Freely available on the web at: http://equilibrator.weizmann.ac.il. Website implemented in Python, MySQL, Apache and Django, with all major browsers supported. The framework is open-source (code.google.com/p/milo-lab), implemented in pure Python and tested mainly on Linux. ron.milo@weizmann.ac.il Supplementary data are available at Bioinformatics online.
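
    The reversibility index itself is defined in the article; a simplified proxy can still be sketched: compute dG' = dG'0 + RT ln Q at the extremes of the stated 3 uM-3 mM concentration range and call the reaction reversible if dG' can change sign within that range. The dG'0 value and the 2-substrate/1-product stoichiometry below are hypothetical, not eQuilibrator output.

# Simplified reversibility check in the spirit of the paper's index: a reaction
# is treated as reversible if dG' = dG'0 + R*T*ln(Q) can change sign while all
# reactant and product concentrations stay within 3 uM - 3 mM.  The dG'0 value
# and stoichiometry are hypothetical, not eQuilibrator output.
import math

R, T = 8.314e-3, 298.15                  # kJ/(mol*K), K
C_MIN, C_MAX = 3e-6, 3e-3                # mol/L

def dg_prime_range(dg0_prime, n_substrates, n_products):
    """Extreme dG' values over the physiological concentration box."""
    ln_q_max = n_products * math.log(C_MAX) - n_substrates * math.log(C_MIN)
    ln_q_min = n_products * math.log(C_MIN) - n_substrates * math.log(C_MAX)
    return dg0_prime + R * T * ln_q_min, dg0_prime + R * T * ln_q_max

def is_reversible(dg0_prime, n_substrates, n_products):
    lo, hi = dg_prime_range(dg0_prime, n_substrates, n_products)
    return lo < 0.0 < hi

# hypothetical reaction A + B -> C with dG'0 = -15 kJ/mol
print(dg_prime_range(-15.0, 2, 1))
print("reversible within 3 uM - 3 mM:", is_reversible(-15.0, 2, 1))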

  3. An integrated open framework for thermodynamics of reactions that combines accuracy and coverage

    PubMed Central

    Noor, Elad; Bar-Even, Arren; Flamholz, Avi; Lubling, Yaniv; Davidi, Dan; Milo, Ron

    2012-01-01

    Motivation: The laws of thermodynamics describe a direct, quantitative relationship between metabolite concentrations and reaction directionality. Despite great efforts, thermodynamic data suffer from limited coverage, scattered accessibility and non-standard annotations. We present a framework for unifying thermodynamic data from multiple sources and demonstrate two new techniques for extrapolating the Gibbs energies of unmeasured reactions and conditions. Results: Both methods account for changes in cellular conditions (pH, ionic strength, etc.) by using linear regression over the ΔG○ of pseudoisomers and reactions. The Pseudoisomeric Reactant Contribution method systematically infers compound formation energies using measured K′ and pKa data. The Pseudoisomeric Group Contribution method extends the group contribution method and achieves a high coverage of unmeasured reactions. We define a continuous index that predicts the reversibility of a reaction under a given physiological concentration range. In the characteristic physiological range 3μM–3mM, we find that roughly half of the reactions in Escherichia coli's metabolism are reversible. These new tools can increase the accuracy of thermodynamic-based models, especially in non-standard pH and ionic strengths. The reversibility index can help modelers decide which reactions are reversible in physiological conditions. Availability: Freely available on the web at: http://equilibrator.weizmann.ac.il. Website implemented in Python, MySQL, Apache and Django, with all major browsers supported. The framework is open-source (code.google.com/p/milo-lab), implemented in pure Python and tested mainly on Linux. Contact: ron.milo@weizmann.ac.il Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22645166

  4. Flow field description of the Space Shuttle Vernier reaction control system exhaust plumes

    NASA Technical Reports Server (NTRS)

    Cerimele, Mary P.; Alred, John W.

    1987-01-01

    The flow field for the Vernier Reaction Control System (VRCS) jets of the Space Shuttle Orbiter has been calculated from the nozzle throat to the far-field region. The calculations involved the use of recently improved rocket engine nozzle/plume codes. The flow field is discussed, and a brief overview of the calculation techniques is presented. In addition, a proposed on-orbit plume measurement experiment, designed to improve future estimations of the Vernier flow field, is addressed.

  5. The STAGS computer code

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Brogan, F. A.

    1978-01-01

    Basic information about the computer code STAGS (Structural Analysis of General Shells) is presented to describe to potential users the scope of the code and the solution procedures that are incorporated. Primarily, STAGS is intended for analysis of shell structures, although it has been extended to more complex shell configurations through the inclusion of springs and beam elements. The formulation is based on a variational approach in combination with local two dimensional power series representations of the displacement components. The computer code includes options for analysis of linear or nonlinear static stress, stability, vibrations, and transient response. Material as well as geometric nonlinearities are included. A few examples of applications of the code are presented for further illustration of its scope.

  6. Bacterial discrimination by means of a universal array approach mediated by LDR (ligase detection reaction)

    PubMed Central

    Busti, Elena; Bordoni, Roberta; Castiglioni, Bianca; Monciardini, Paolo; Sosio, Margherita; Donadio, Stefano; Consolandi, Clarissa; Rossi Bernardi, Luigi; Battaglia, Cristina; De Bellis, Gianluca

    2002-01-01

    Background: PCR amplification of bacterial 16S rRNA genes provides the most comprehensive and flexible means of sampling bacterial communities. Sequence analysis of these cloned fragments can provide qualitative and quantitative insight into the microbial population under scrutiny, although this approach is not suited to large-scale screenings. Other methods, such as denaturing gradient gel electrophoresis, heteroduplex or terminal restriction fragment analysis, are rapid and therefore amenable to field-scale experiments. A very recent addition to these analytical tools is microarray technology. Results: Here we present our results using a Universal DNA Microarray approach as an analytical tool for bacterial discrimination. The proposed procedure is based on the properties of the DNA ligation reaction and requires the design of two probes specific for each target sequence. One oligo carries a fluorescent label and the other a unique sequence (cZipCode, or complementary ZipCode) which identifies a ligation product. Ligated fragments, obtained in the presence of a proper template (a PCR-amplified fragment of the 16S rRNA gene), contain both the fluorescent label and the unique sequence and are therefore addressed to the location on the microarray where the corresponding ZipCode sequence has been spotted. Such an array is therefore "Universal", being unrelated to a specific molecular analysis. Here we present the design of probes specific for some groups of bacteria and their application to bacterial diagnostics. Conclusions: The combined use of selective probes, the ligation reaction and the Universal Array approach yields an analytical procedure with good power of discrimination among bacteria. PMID:12243651

  7. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of a SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation studies, with reference to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10⁻⁹, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.
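
    As a rough illustration of how BER figures such as the 10⁻⁹ threshold above follow from an SNR analysis, the sketch below uses the Gaussian approximation commonly quoted in the SAC-OCDMA literature, BER = (1/2) erfc(sqrt(SNR/8)). The SNR values are arbitrary placeholders; the paper's actual SNR expression for the EMD code (including shot, thermal and PIIN terms) is not reproduced here.

        import math

        def ber_from_snr(snr_linear):
            """Gaussian-approximation BER often used in SAC-OCDMA analyses:
            BER = (1/2) * erfc(sqrt(SNR / 8)).  The SNR itself would come from a
            code-specific noise model (shot, thermal, PIIN), not shown here."""
            return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

        # Illustrative check against a 1e-9 BER threshold.
        for snr_db in (20, 25, 30):
            snr = 10 ** (snr_db / 10)
            print(f"SNR = {snr_db} dB -> BER ~ {ber_from_snr(snr):.2e}")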

  8. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight; all the other code words have weights that are multiples of four, with a minimum weight of 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard- or soft-decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code; the theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15, with even-weight words congruent to zero modulo four. Decoding, for hard or soft decisions, is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, all the rest having weights greater than or equal to 16.

  9. Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2008-01-01

    AFRL-RI-RS-TR-2007-288, Final Technical Report, January 2008. Only fragments of the abstract are indexed: "...complements of one another and the DNA duplex formed is a Watson-Crick (WC) duplex. However, there are many instances when the formation of non-WC..." and "...that the user's requirements for probe selection are met based on the Watson-Crick probe locality within a target. The second type, called..."

  10. 3D neutronics codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronics codes BIPR-8 from the Kurchatov Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of the coupled codes to different transient and accident scenarios are presented. The need for further investigation is discussed.

  11. Securing mobile code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas

    2004-10-01

    Software that is designed so that it can issue functions to move itself from one computing platform to another is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed-software camp we examine some commonly proposed techniques, including Java, D'Agents and Flask. For the specialized-hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent or at least slow down reverse-engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks and

  12. The PHITS code for space applications: status and recent developments

    NASA Astrophysics Data System (ADS)

    Sihver, Lembit; Ploc, Ondrej; Sato, Tatsuhiko; Niita, Koji; Hashimoto, Shintaro; El-Jaby, Samy

    Since COSPAR 2012, the Particle and Heavy Ion Transport code System, PHITS, has been upgraded and released to the public [1]. The code has been improved, and so have the contents of its package, such as the attached data libraries. In the new version, the intra-nuclear cascade models INCL4.6 and INC-ELF have been implemented, as well as the Kurotama model for total reaction cross sections. The accuracy of the new reaction models for transporting galactic cosmic rays (GCR) was investigated by comparison with experimental data. The incorporation of these models has improved the capability of PHITS to perform particle transport simulations for different space applications. A methodology for assessing the pre-mission exposure of space crew aboard the ISS has been developed in terms of an effective dose equivalent [2]. PHITS was used to calculate the particle transport of the GCR and trapped radiation through the hull of the ISS. By using the predicted spectra and fluence-to-dose conversion factors, the semi-empirical ISSCREM code [3,4,5] was then scaled to predict the effective dose equivalent. This methodology provides an opportunity for pre-flight predictions of the effective dose equivalent, which can be compared to post-flight estimates, and therefore offers a means to assess the impact of radiation exposure on ISS flight crew. We have also simulated [6] the protective curtain experiment, which was performed to test the efficiency of water-soaked hygienic tissue wipes and towels as simple and cost-effective additional spacecraft shielding. The dose from trapped particles and low-energy GCR was significantly reduced, which shows that the protective curtains are efficient when applied to spacecraft in LEO. The results of these benchmark calculations, as well as the mentioned applications of PHITS to space dosimetry, will be presented. [1] T. Sato et al. J. Nucl. Sci. Technol. 50, 913-923 (2013). [2] S. El-Jaby, et al. Adv. Space Res. doi: http
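
    The last step of the dose-assessment workflow described above (transported particle spectra folded with fluence-to-dose conversion factors) amounts to a simple weighted sum. The sketch below shows only that folding step with made-up numbers; the fluences are not PHITS output and the coefficients are not the ICRP/ISSCREM values used in the cited work.

        import numpy as np

        def effective_dose(fluence, conversion_pSv_cm2):
            """Fold a particle fluence spectrum with fluence-to-dose conversion
            coefficients: E = sum_i Phi(E_i) * h(E_i).  Fluence in particles/cm^2
            per energy bin, coefficients in pSv*cm^2, result in pSv."""
            return float(np.dot(np.asarray(fluence), np.asarray(conversion_pSv_cm2)))

        # Illustrative numbers only -- not PHITS output and not ICRP coefficients.
        # Three coarse proton energy bins (nominally 10, 100 and 1000 MeV):
        phi     = [1e4, 5e3, 1e3]        # protons/cm^2 per bin
        h_coeff = [10.0, 300.0, 800.0]   # pSv*cm^2 per proton (made up)
        print(f"Effective dose ~ {effective_dose(phi, h_coeff) / 1e6:.2f} uSv")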

  13. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods are presented for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check (LDPC) code.
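
    The second (circulant) lifting stage can be sketched as follows: every nonzero entry of a small base matrix is replaced by a cyclically shifted identity block, and every zero entry by an all-zero block, yielding the full parity-check matrix. The base matrix and shift values below are hypothetical placeholders, not the protograph claimed in the patent, and the first-stage lifting to an intermediate protograph is omitted.

        import numpy as np

        def circulant(shift, size):
            """size x size identity matrix cyclically shifted by `shift` columns."""
            return np.roll(np.eye(size, dtype=int), shift, axis=1)

        def lift(base, shifts, size):
            """Replace each base-matrix entry with an all-zero block (entry 0)
            or a circulant permutation block (entry 1) of the given size."""
            rows = []
            for i, row in enumerate(base):
                blocks = [circulant(shifts[i][j], size) if b
                          else np.zeros((size, size), dtype=int)
                          for j, b in enumerate(row)]
                rows.append(np.hstack(blocks))
            return np.vstack(rows)

        # Hypothetical 2x4 protograph and arbitrary shift values (illustrative only).
        base   = [[1, 1, 1, 0],
                  [0, 1, 1, 1]]
        shifts = [[0, 1, 2, 0],
                  [0, 2, 4, 1]]
        H = lift(base, shifts, size=5)       # 10 x 20 parity-check matrix
        print(H.shape, "column weights:", H.sum(axis=0))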

  14. Efficient Polar Coding of Quantum Information

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato

    2012-08-01

    Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
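
    For the erasure channels mentioned above, the classical polarization phenomenon underlying polar codes is easy to reproduce: recursively combining and splitting a binary erasure channel with erasure probability z yields synthesized channels with erasure probabilities 2z - z^2 and z^2. The sketch below shows this standard classical recursion and selects the better half of the synthesized channels; it is not the quantum (CSS-style) construction of the paper.

        def polarize_bec(eps, n_levels):
            """Erasure probabilities of the 2**n_levels synthesized channels
            obtained by recursively applying the BEC polarization rules
            z- = 2z - z^2 (worse channel) and z+ = z^2 (better channel)."""
            z = [eps]
            for _ in range(n_levels):
                z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
            return z

        # Illustrative: polarize a BEC(0.5) into 16 channels and keep the best half.
        channels = polarize_bec(0.5, 4)
        good = sorted(range(len(channels)), key=lambda i: channels[i])[:8]
        print("best channel indices:", sorted(good))
        print("their erasure probabilities:", [round(channels[i], 4) for i in sorted(good)])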

  15. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  16. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
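
    The information-limiting correlations being generalized here can be illustrated in the linear setting: adding a rank-one "differential" component eps * f'f'^T to the noise covariance caps the linear Fisher information at 1/eps, no matter how many neurons are included. The sketch below demonstrates that saturation numerically with made-up tuning-curve derivatives; it is not the authors' nonlinear model.

        import numpy as np

        rng = np.random.default_rng(0)

        def linear_fisher(fprime, cov):
            """Linear Fisher information J = f'^T C^{-1} f'."""
            return float(fprime @ np.linalg.solve(cov, fprime))

        eps = 0.01            # strength of the differential (information-limiting) term
        for n in (50, 200, 1000):
            fprime = rng.normal(size=n)                 # made-up tuning-curve derivatives
            c0 = np.eye(n)                              # independent-noise baseline
            c_limited = c0 + eps * np.outer(fprime, fprime)
            j0 = linear_fisher(fprime, c0)              # grows roughly linearly with n
            j  = linear_fisher(fprime, c_limited)       # saturates below 1/eps = 100
            print(f"n={n:5d}  J_independent={j0:8.1f}  J_limited={j:6.1f}")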

  17. A subset of conserved mammalian long non-coding RNAs are fossils of ancestral protein-coding genes.

    PubMed

    Hezroni, Hadas; Ben-Tov Perry, Rotem; Meir, Zohar; Housman, Gali; Lubelsky, Yoav; Ulitsky, Igor

    2017-08-30

    Only a small portion of human long non-coding RNAs (lncRNAs) appear to be conserved outside of mammals, but the events underlying the birth of new lncRNAs in mammals remain largely unknown. One potential source is remnants of protein-coding genes that transitioned into lncRNAs. We systematically compare lncRNA and protein-coding loci across vertebrates, and estimate that up to 5% of conserved mammalian lncRNAs are derived from lost protein-coding genes. These lncRNAs have specific characteristics, such as broader expression domains, that set them apart from other lncRNAs. Fourteen lncRNAs have sequence similarity with the loci of the contemporary homologs of the lost protein-coding genes. We propose that selection acting on enhancer sequences is mostly responsible for retention of these regions. As an example of an RNA element from a protein-coding ancestor that was retained in the lncRNA, we describe in detail a short translated ORF in the JPX lncRNA that was derived from an upstream ORF in a protein-coding gene and retains some of its functionality. We estimate that ~ 55 annotated conserved human lncRNAs are derived from parts of ancestral protein-coding genes, and loss of coding potential is thus a non-negligible source of new lncRNAs. Some lncRNAs inherited regulatory elements influencing transcription and translation from their protein-coding ancestors and those elements can influence the expression breadth and functionality of these lncRNAs.

  18. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, where fundamental knowledge about coding, block coding and convolutional coding is discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, interleaver and puncturing pattern are examined, and a criterion for the best selection of system components is provided. The puncturing-pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
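
    Puncturing, as discussed in the report, raises the rate of a rate-1/3 turbo code by periodically deleting parity bits. The sketch below applies a periodic puncturing pattern to the three output streams; the specific pattern (keep all systematic bits, alternate the two parity streams, giving rate 1/2) is a common textbook example and not necessarily one of the patterns evaluated in the report.

        def puncture(systematic, parity1, parity2, pattern):
            """Apply a periodic puncturing pattern to a rate-1/3 turbo codeword.

            `pattern` is a list of (keep_sys, keep_p1, keep_p2) flags, cycled over
            the bit positions; kept bits are emitted in stream order."""
            out = []
            for i, (s, p1, p2) in enumerate(zip(systematic, parity1, parity2)):
                ks, k1, k2 = pattern[i % len(pattern)]
                out += [b for b, keep in ((s, ks), (p1, k1), (p2, k2)) if keep]
            return out

        # Illustrative pattern for rate 1/2: keep all systematic bits and
        # alternate between the two parity streams.
        pattern  = [(1, 1, 0), (1, 0, 1)]
        sys_bits = [1, 0, 1, 1, 0, 0]
        par1     = [0, 1, 1, 0, 1, 0]
        par2     = [1, 1, 0, 0, 0, 1]
        tx = puncture(sys_bits, par1, par2, pattern)
        print(tx, f"rate = {len(sys_bits)}/{len(tx)}")   # 6 info bits in 12 coded bits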

  19. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
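
    The complexity measure used above, trellis edges per encoded bit, is straightforward for the conventional trellis: a rate k/n convolutional code with total memory m has 2^m states, each with 2^k outgoing edges, and each trellis section emits n bits. The sketch below computes this conventional count; as the article argues, the minimal trellis (for example, of a punctured code) can be cheaper.

        def conventional_trellis_edges_per_bit(k, n, memory):
            """Edges per encoded bit in the conventional trellis of a rate k/n
            convolutional code with total memory `memory`: each of the 2**memory
            states has 2**k outgoing edges, and each trellis section emits n bits."""
            states = 2 ** memory
            edges_per_section = states * 2 ** k
            return edges_per_section / n

        # Illustrative examples (conventional trellis only; the minimal trellis
        # of a punctured code can be cheaper, which is the article's point).
        print(conventional_trellis_edges_per_bit(k=1, n=2, memory=6))   # rate-1/2, memory-6 code: 64.0
        print(conventional_trellis_edges_per_bit(k=3, n=4, memory=6))   # a rate-3/4 code: 128.0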

  20. The Coding Question.

    PubMed

    Gallistel, C R

    2017-07-01

    Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.