Sample records for penelope code simulacao

  1. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

    2002-09-11

    The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall cell model is an equivalent water cylinder for the three codes, but with different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because it better reproduces the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the energy-transfer points that constitute a radiation track may actually fall in the space between spheres, outside any spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computing time of this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important to address dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation even with such small geometries and energies, which are far below the normal range of use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
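
    As context for the scoring-geometry argument above, the short sketch below (an illustration only, not the authors' PENELOPE or PITS user code; the deposit format and all numbers are hypothetical) bins a list of energy deposits into concentric hollow-cylinder shells. Because the shells tile the cell volume completely, every energy-transfer point of a track inside the cylinder lands in exactly one scoring volume, unlike disjoint spheres, which leave gaps between them.

      import numpy as np

      def score_hollow_cylinders(deposits, r_edges, z_min, z_max):
          """Sum energy deposits (x, y, z, E) into concentric hollow-cylinder shells."""
          x, y, z, e = deposits.T
          r = np.hypot(x, y)                      # radial distance from the cylinder axis
          inside = (z >= z_min) & (z < z_max)
          energy, _ = np.histogram(r[inside], bins=r_edges, weights=e[inside])
          # dose per shell = deposited energy / (density * shell volume); unit density assumed
          volume = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2) * (z_max - z_min)
          return energy / volume

      # hypothetical track: 1000 deposits of 1 keV inside a 10 um radius, 5 um tall cell
      rng = np.random.default_rng(0)
      deposits = np.column_stack([rng.uniform(-10, 10, 1000),   # x (um)
                                  rng.uniform(-10, 10, 1000),   # y (um)
                                  rng.uniform(0, 5, 1000),      # z (um)
                                  np.full(1000, 1.0)])          # E (keV)
      print(score_hollow_cylinders(deposits, np.linspace(0, 10, 6), 0.0, 5.0))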

  2. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers have pointed out that higher accuracy can be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or the DLC-200 cross-section library, assuming a point source located at the centre of a cylinder of 30 cm diameter and 20 cm length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) from PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower than PENELOPE or the other codes in the range of 20-100 keV. However, the differences among the four datasets became negligible above 100 keV.

  3. Multiple scattering of 13 and 20 MeV electrons by thin foils: a Monte Carlo study with GEANT, Geant4, and PENELOPE.

    PubMed

    Vilches, M; García-Pareja, S; Guerrero, R; Anguiano, M; Lallena, A M

    2009-09-01

    In this work, recent results from experiments and simulations (with EGSnrc) performed by Ross et al. [Med. Phys. 35, 4121-4131 (2008)] on electron scattering by foils of different materials and thicknesses are compared to those obtained using several Monte Carlo codes. Three codes have been used: GEANT (version 3.21), Geant4 (version 9.1, patch 03) and PENELOPE (version 2006). In the case of PENELOPE, both mixed and fully detailed simulations have been carried out. Transverse dose distributions in air have been obtained in order to compare with measurements. The detailed PENELOPE simulations show excellent agreement with experiment. The calculations performed with GEANT and PENELOPE (mixed) agree with experiment within 3%, except for the Be foil. In the case of Geant4, the distributions are 5% narrower than the experimental ones, though the agreement is very good for the Be foil. The transverse dose distribution in water obtained with PENELOPE (mixed) is 4% wider than that calculated by Ross et al. using EGSnrc and 1% narrower than the transverse dose distribution in air considered in the experiment. All the codes give reasonable agreement (within 5%) with the experimental results for all the materials and thicknesses studied.

  4. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE.

    PubMed

    Sempau, J; Sánchez-Reyes, A; Salvat, F; ben Tahar, H O; Jiang, S B; Fernández-Varea, J M

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the 'latent' variance in the phase-space file, are discussed in detail.

  5. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce results equivalent to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements was performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
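
    The voxelized transport mentioned in the Methods relies on Woodcock (delta) tracking. The snippet below is a minimal, generic sketch of that sampling scheme (not the gPENELOPE implementation; the two-region attenuation map is a hypothetical example): the photon is tracked using the maximum attenuation coefficient of the whole grid, and each tentative collision is accepted as real with probability mu_local/mu_max, so no explicit voxel-boundary crossings are needed.

      import math, random

      def woodcock_step(position, direction, mu_of, mu_max, rng=random):
          """Sample the next real photon interaction site in a heterogeneous medium.

          mu_of(p) returns the local linear attenuation coefficient (1/cm) at point p;
          mu_max is an upper bound of mu over the whole geometry.
          """
          x, y, z = position
          ux, uy, uz = direction
          while True:
              s = -math.log(1.0 - rng.random()) / mu_max        # tentative free path
              x, y, z = x + s * ux, y + s * uy, z + s * uz
              if rng.random() * mu_max < mu_of((x, y, z)):      # real interaction?
                  return (x, y, z)
              # otherwise the collision was virtual: keep flying in the same direction

      # hypothetical phantom: mu = 0.2/cm for x < 5 cm, 0.05/cm beyond
      mu = lambda p: 0.2 if p[0] < 5.0 else 0.05
      print(woodcock_step((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), mu, mu_max=0.2))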

  6. TH-AB-BRA-07: PENELOPE-Based GPU-Accelerated Dose Calculation System Applied to MRI-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y; Mazur, T; Green, O

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian's KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  7. Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.

    PubMed

    Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle

    2014-11-01

    To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissues. Monte Carlo (MC) simulation codes are extremely useful tools for calculating energy deposition. The present work was focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source located within the cortical bone. S-values (absorbed dose per unit cumulated activity) were calculated by Monte Carlo simulation using PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). Cytoplasm, nucleus, cell surface, mouse femur bone and the Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2 to 10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, cytoplasm and cell surface. The S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values, with relative deviations of less than 4.5%. The dose distribution to mouse bone marrow cells showed that the cells located near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
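
    For reference, the S-value quoted above (absorbed dose per unit cumulated activity) enters the standard MIRD dose equation; the relations below are the generic MIRD formalism, not formulas taken from this paper:

      \bar{D}(\text{target}) = \sum_{\text{source}} \tilde{A}(\text{source})\,
          S(\text{target} \leftarrow \text{source}),
      \qquad
      S(\text{target} \leftarrow \text{source}) =
          \frac{1}{m_{\text{target}}} \sum_i \Delta_i\,
          \phi_i(\text{target} \leftarrow \text{source})

    Here \tilde{A} is the cumulated activity, m_target the target mass, Δ_i the mean energy emitted per decay for radiation type i, and φ_i the absorbed fraction; the Monte Carlo codes essentially estimate the absorbed fractions φ_i for the cellular geometry.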

  8. Monte Carlo dosimetric characterization of the Flexisource Co-60 high-dose-rate brachytherapy source using PENELOPE.

    PubMed

    Almansa, Julio F; Guerrero, Rafael; Torres, Javier; Lallena, Antonio M

    60Co sources have been commercialized as an alternative to 192Ir sources for high-dose-rate (HDR) brachytherapy. One of them is the Flexisource Co-60 HDR source manufactured by Elekta. The only available dosimetric characterization of this source is that of Vijande et al. [J Contemp Brachytherapy 2012; 4:34-44], whose results were not included in the AAPM/ESTRO consensus document. In that work, the dosimetric quantities were calculated as averages of the results obtained with the Geant4 and PENELOPE Monte Carlo (MC) codes, though for other sources significant differences have been reported between the values obtained with these two codes. The aim of this work is to perform the dosimetric characterization of the Flexisource Co-60 HDR source using PENELOPE. The MC simulation code PENELOPE (v. 2014) has been used. Following the recommendations of the AAPM/ESTRO report, the radial dose function, the anisotropy function, the air-kerma strength, the dose rate constant and the absorbed dose rate in water have been calculated. The results we have obtained exceed those of Vijande et al.; in particular, the dose rate constant is ∼0.85% larger. A similar difference is also found in the other dosimetric quantities. The effect of the electrons emitted in the decay of 60Co, usually neglected in this kind of simulation, is significant up to distances of 0.25 cm from the source. The systematic and significant differences we have found between the PENELOPE results and the average values of Vijande et al. indicate that the dosimetric characterizations carried out with the various MC codes should be reported independently. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
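
    The quantities listed above are the inputs of the AAPM TG-43 dose-rate formalism referenced by the AAPM/ESTRO report; in its line-source form the dose rate in water around the source is written as

      \dot{D}(r,\theta) = S_K \, \Lambda \,
          \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta),
      \qquad r_0 = 1\ \text{cm},\ \theta_0 = 90^\circ

    with S_K the air-kerma strength, Λ the dose rate constant, G_L the line-source geometry function, g_L(r) the radial dose function and F(r,θ) the anisotropy function.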

  9. Formal specification and verification of Ada software

    NASA Technical Reports Server (NTRS)

    Hird, Geoffrey R.

    1991-01-01

    The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is described. The Penelope user inputs mathematical definitions, Larch-style specifications and Ada code, and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques assisting the reuse of a verification effort on modified code.

  10. Monte Carlo Simulation of X-Ray Spectra in Mammography and Contrast-Enhanced Digital Mammography Using the Code PENELOPE

    NASA Astrophysics Data System (ADS)

    Cunha, Diego M.; Tomal, Alessandra; Poletti, Martin E.

    2013-04-01

    In this work, the Monte Carlo (MC) code PENELOPE was employed to simulate x-ray spectra in mammography and contrast-enhanced digital mammography (CEDM). Spectra for Mo, Rh and W anodes were obtained for tube potentials between 24 and 36 kV for mammography, and between 45 and 49 kV for CEDM. The spectra obtained from the simulations were analytically filtered to correspond to the anode/filter combinations usually employed in each technique (Mo/Mo, Rh/Rh and W/Rh for mammography; Mo/Cu, Rh/Cu and W/Cu for CEDM). For the Mo/Mo combination, the simulated spectra were compared with those obtained experimentally, and for the W-anode spectra, with experimental data from the literature, through comparison of distribution shape, average energies, half-value layers (HVL) and transmission curves. For all combinations evaluated, the simulated spectra were also compared with those provided by different models from the literature. Results showed that the code PENELOPE provides mammographic x-ray spectra in good agreement with those experimentally measured and those from the literature. The differences in the values of HVL ranged between 2% and 7% for the anode/filter combinations and tube potentials employed in mammography, and were less than 5% for those employed in CEDM. The transmission curves for the obtained spectra also showed good agreement with those computed from reference spectra, with average relative differences of less than 12% for mammography and CEDM. These results show that the code PENELOPE can be a useful tool to generate x-ray spectra for studies in mammography and CEDM, and also for the evaluation of new x-ray tube designs and new anode materials.
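
    A half-value layer such as those compared above can be extracted from any simulated spectrum by finding the absorber thickness that halves the air-kerma-weighted transmission. The sketch below is a generic illustration of that numerical step (the attenuation and kerma-weight functions are placeholders supplied by the caller, not data from the paper):

      import numpy as np

      def hvl(energies, fluence, mu_filter, kerma_weight):
          """Half-value layer (cm) of a spectrum behind an ideal filter.

          mu_filter(E): linear attenuation coefficient of the filter material (1/cm).
          kerma_weight(E): factor converting fluence to air kerma for each bin.
          """
          w = fluence * kerma_weight(energies)          # kerma contribution per bin
          mu = mu_filter(energies)
          k0 = w.sum()

          def transmitted(t):
              return (w * np.exp(-mu * t)).sum()

          lo, hi = 0.0, 1.0
          while transmitted(hi) > 0.5 * k0:             # bracket the half-value thickness
              hi *= 2.0
          for _ in range(60):                           # bisection
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if transmitted(mid) > 0.5 * k0 else (lo, mid)
          return 0.5 * (lo + hi)

      # monoenergetic sanity check: HVL should equal ln(2)/mu
      print(hvl(np.array([30.0]), np.array([1.0]),
                lambda e: np.full_like(e, 1.2), lambda e: e))   # ~0.578 cm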

  11. Monte Carlo determination of the conversion coefficients Hp(3)/Ka in a right cylinder phantom with 'PENELOPE' code. Comparison with 'MCNP' simulations.

    PubMed

    Daures, J; Gouriou, J; Bordy, J M

    2011-03-01

    This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures, and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity Hp(3). In this study, a set of energy- and angular-dependent conversion coefficients Hp(3)/Ka, in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The Hp(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation, as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially for small incidence angles. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 using two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as known, due to secondary electrons. PENELOPE and MCNP5 results agree for both the kerma approximation and the absorbed-dose calculation of Hp(3), and prove that, for photon energies larger than 1 MeV, the transport of secondary electrons has to be taken into account.
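
    As background for the two scoring approaches compared above (a standard statement, not reproduced from the paper): the conversion coefficient relates the personal dose equivalent at 3 mm depth to the air kerma free-in-air, and the kerma approximation replaces the absorbed dose by the collision kerma, which is only valid under charged-particle equilibrium, hence the divergence above 1 MeV:

      h_p(3; E, \alpha) = \frac{H_p(3; E, \alpha)}{K_a},
      \qquad
      D \;\approx\; K_{\mathrm{col}} = \Psi \left(\frac{\mu_{\mathrm{en}}}{\rho}\right)
      \quad \text{(charged-particle equilibrium)}

    where Ψ is the photon energy fluence and μ_en/ρ the mass energy-absorption coefficient at the scoring depth.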

  12. MCNP6.1 simulations for low-energy atomic relaxation: Code-to-code comparison with GATEv7.2, PENELOPE2014, and EGSnrc

    NASA Astrophysics Data System (ADS)

    Jung, Seongmoon; Sung, Wonmo; Lee, Jaegi; Ye, Sung-Joon

    2018-01-01

    Emerging radiological applications of gold nanoparticles demand low-energy electron/photon transport calculations that include the details of the atomic relaxation process. Recently, MCNP® version 6.1 (MCNP6.1) was released with extended cross-sections for low-energy electrons/photons, subshell photoelectric cross-sections, and more detailed atomic relaxation data than previous versions. However, the atomic relaxation process of MCNP6.1 has not yet been fully tested with its new physics library (eprdata12), which is based on the Evaluated Atomic Data Library (EADL). In this study, MCNP6.1 was compared with GATEv7.2, PENELOPE2014 and EGSnrc, which have often been used to simulate low-energy atomic relaxation processes. The simulations were performed to acquire both the photon and the electron spectra produced by interactions of 15 keV electrons or photons with a 10-nm-thick gold nano-slab. The photon-induced fluorescence X-rays from MCNP6.1 agreed fairly well with those from GATEv7.2 and PENELOPE2014, while the electron-induced fluorescence X-rays of the four codes showed some discrepancies. Good agreement was observed between the photon-induced Auger electrons simulated by MCNP6.1 and GATEv7.2. The recent release of MCNP6.1 with eprdata12 can thus be used to simulate photon-induced atomic relaxation.

  13. Formally specifying the logic of an automatic guidance controller

    NASA Technical Reports Server (NTRS)

    Guaspari, David

    1990-01-01

    The following topics are covered in viewgraph form: (1) the Penelope Project; (2) the logic of an experimental automatic guidance control system for a 737; (3) Larch/Ada specification; (4) some failures of informal description; (5) description of mode changes caused by switches; (6) intuitive description of window status (chosen vs. current); (7) design of the code; (8) and specifying the code.

  14. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    PubMed

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows importing DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Assessment of ionization chamber correction factors in photon beams using a time saving strategy with PENELOPE code.

    PubMed

    Reis, C Q M; Nicolucci, P

    2016-02-01

    The purpose of this study was to investigate Monte Carlo-based perturbation and beam quality correction factors for ionization chambers in photon beams using a time-saving strategy with the PENELOPE code. Simulations for calculating the absorbed dose to water using full photon-beam spectra impinging on the whole water phantom were performed and compared with simulations using a phase-space file previously stored around the point of interest. The widely used NE2571 ionization chamber was modeled with PENELOPE using data from the literature in order to calculate the absorbed dose to the air cavity of the chamber. Absorbed doses to water at the reference depth were also calculated, providing the perturbation and beam quality correction factors for that chamber in high-energy photon beams. The results obtained in this study show that simulations with appropriately stored phase-space files can be up to ten times shorter than simulations using a full photon-beam spectrum in the input file. Values of kQ and its components for the NE2571 ionization chamber showed good agreement with published values in the literature and are provided with typical statistical uncertainties of 0.2%. Comparisons with kQ values published in current dosimetry protocols such as AAPM TG-51 and IAEA TRS-398 showed maximum percentage differences of 0.1% and 0.6%, respectively. The proposed strategy yielded a significant efficiency gain and can be applied to a variety of ionization chambers and clinical photon beams. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
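
    One common Monte Carlo route to kQ, stated here as an assumption about the general method rather than the paper's exact decomposition, is to take the ratio of the dose to water to the mean dose in the chamber's air cavity at the user quality Q and at the reference quality Q0 (60Co):

      f_Q = \frac{D_{w}^{Q}}{\bar{D}_{\mathrm{air}}^{Q}},
      \qquad
      k_Q = \frac{f_Q}{f_{Q_0}}

    The phase-space strategy described above only changes how the particles reaching the point of interest are generated; the ratios themselves are unaffected.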

  16. SU-E-T-178: Optically Stimulated Luminescence (OSL) Dosimetry: A Study of α-Al2O3:C Assessed by PENELOPE Monte Carlo Simulation.

    PubMed

    Nicolucci, P; Schuch, F

    2012-06-01

    To use the Monte Carlo code PENELOPE to study the attenuation and tissue-equivalence properties of α-Al2O3:C for OSL dosimetry. Mass attenuation coefficients of α-Al2O3 and α-Al2O3:C with carbon weight concentrations from 1% to 150% were simulated with the PENELOPE Monte Carlo code and compared to the mass attenuation coefficients of soft tissue for photon beams ranging from 50 kV to 10 MV. The attenuation of primary 6 MV and 10 MV photon beams and the generation of secondary electrons by α-Al2O3:C dosimeters positioned on the entrance surface of a water phantom were also studied. A difference of up to 90% was found in the mass attenuation coefficient between pure α-Al2O3 and the material with 150% weight concentration of dopant at 1.5 keV, corresponding to the K-edge photoelectric absorption of aluminum. However, for energies above 80 keV the concentration of carbon does not affect the mass attenuation coefficient and the material is tissue equivalent for the beams studied. The ratio between the mass attenuation coefficients of α-Al2O3:C and of soft tissue is less than unity because of the higher density of α-Al2O3 (2.12 g/cm³), and its tissue equivalence diminishes for lower carbon concentrations and lower energies, owing to the dependence of the radiation interaction effects on atomic number. The largest attenuation of the primary photon beams by the dosimeter was 16% at 250 keV, and the maximum increase in secondary electron fluence at the entrance surface of the phantom was 91% at 2 MeV. The use of OSL dosimeters in radiation therapy can be optimized by using PENELOPE Monte Carlo simulation to study the attenuation and response characteristics of the material. © 2012 American Association of Physicists in Medicine.
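
    The mass attenuation coefficient of the doped material follows from the elemental coefficients through the usual mixture rule (weight fractions w_i), which is what makes the carbon-concentration study above well defined:

      \left(\frac{\mu}{\rho}\right)_{\mathrm{Al_2O_3:C}}
          = \sum_i w_i \left(\frac{\mu}{\rho}\right)_i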

  17. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    PubMed Central

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce results equivalent to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements was performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123

  18. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    PubMed

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce results equivalent to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements was performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  19. Contact radiotherapy using a 50 kV X-ray system: Evaluation of relative dose distribution with the Monte Carlo code PENELOPE and comparison with measurements

    NASA Astrophysics Data System (ADS)

    Croce, Olivier; Hachem, Sabet; Franchisseur, Eric; Marcié, Serge; Gérard, Jean-Pierre; Bordy, Jean-Marc

    2012-06-01

    This paper presents a dosimetric study of the system named "Papillon 50", used in the radiotherapy department of the Centre Antoine-Lacassagne, Nice, France. The machine provides a 50 kVp X-ray beam, currently used to treat rectal cancers. The system can be mounted with various applicators of different diameters or shapes. These applicators can be fixed over the main rod tube of the unit in order to deliver the prescribed absorbed dose to the tumor with an optimal distribution. We have analyzed depth-dose curves and dose profiles for the bare tube and for a set of three applicators. Dose measurements were made with an ionization chamber (PTW type 23342) and Gafchromic films (EBT2). We have also compared the measurements with simulations performed using the Monte Carlo code PENELOPE. Simulations were performed with a detailed geometrical description of the experimental setup and with sufficient statistics. The simulation results are in agreement with the experimental measurements and provide an accurate evaluation of the dose delivered. The depths of the 50% isodose in water for the various applicators are 4.0, 6.0, 6.6 and 7.1 mm. The PENELOPE Monte Carlo simulations agree with the measurements for a 50 kV X-ray system and are able to confirm the measurements provided by Gafchromic films or ionization chambers. The results also demonstrate that Monte Carlo simulations could be helpful to validate future applicators designed for other localizations such as breast or skin cancers. Furthermore, Monte Carlo simulations could be a reliable alternative for a rapid evaluation of the dose delivered by such a system, which uses multiple designs of applicators.

  20. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    NASA Astrophysics Data System (ADS)

    España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M

    2009-03-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  1. SU-F-T-367: Using PRIMO, a PENELOPE-Based Software, to Improve the Small Field Dosimetry of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benmakhlouf, H; Andreo, P; Brualla, L

    2016-06-15

    Purpose: To calculate output correction factors for Varian Clinac 2100iX beams for seven small-field detectors and use the values to determine the small-field output factors for the linacs at Karolinska University Hospital. Methods: Phase-space files (psf) for square fields between 0.25 cm and 10 cm were calculated using the PENELOPE-based PRIMO software. The linac MC model was tuned by comparing PRIMO-estimated and experimentally determined depth doses and lateral dose profiles for 40 cm x 40 cm fields. The calculated psf were used as radiation sources to calculate the correction factors of IBA and PTW detectors with the code penEasy/PENELOPE. Results: The optimal tuning parameters of the linac MC model in PRIMO were 5.4 MeV incident electron energy and zero energy spread, focal spot size and beam divergence. Correction factors obtained for the liquid ionization chamber (PTW-T31018) are within 1% down to 0.5 cm fields. For unshielded diodes (IBA-EFD, IBA-SFD, PTW-T60017 and PTW-T60018) the corrections are up to 2% at intermediate fields (>1 cm side), reaching −11% for fields smaller than 1 cm. The shielded diode (IBA-PFD and PTW-T60016) corrections vary with field size from 0 to −4%. Volume-averaging effects are found for most detectors for the 0.25 cm field. Conclusion: Good agreement was found between correction factors based on PRIMO-generated psf and those from other publications. The calculated factors will be implemented in output factor measurements (using several detectors) in the clinic. PRIMO is a user-friendly general-purpose code capable of generating small-field psf and can be used without having to code one's own linac geometry. It can therefore be used to improve clinical dosimetry, especially in the commissioning of linear accelerators. Important dosimetry data, such as dose profiles and output factors, can be determined more accurately for a specific machine, geometry and setup by using PRIMO together with an MC model of the detector used.
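
    Output correction factors of this kind are normally defined within the Alfonso et al. small-field formalism (also adopted by IAEA TRS-483); assuming that convention, the field output factor is obtained from the detector readings M as

      \Omega_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}
          = \frac{M_{Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}}}
                 {M_{Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}}}\;
            k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}

    where k is the detector- and field-size-specific output correction factor that the PRIMO-generated phase-space files are used to compute.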

  2. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code

    NASA Astrophysics Data System (ADS)

    Panettieri, Vanessa; Amor Duch, Maria; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

    2007-01-01

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm² and a thickness of 0.5 µm, placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water™ build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors to entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water™ cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.

  3. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code.

    PubMed

    Panettieri, Vanessa; Duch, Maria Amor; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

    2007-01-07

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm² and a thickness of 0.5 µm, placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors to entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.

  4. Validation of a modified PENELOPE Monte Carlo code for applications in digital and dual-energy mammography

    NASA Astrophysics Data System (ADS)

    Del Lama, L. S.; Cunha, D. M.; Poletti, M. E.

    2017-08-01

    The presence and morphology of microcalcification clusters are a key early indicator of breast carcinomas. However, the visualization of those structures may be jeopardized by overlapping tissues, even in digital mammography systems. Although digital mammography is the current standard for breast cancer diagnosis, further improvements are needed in order to address some of these physical limitations. One possible solution is the dual-energy (DE) technique, which is able to highlight specific lesions or cancel out the tissue background. In this context, this work aimed to evaluate several quantities of interest in radiation applications and compare those values with earlier studies, in order to validate a modified PENELOPE code for digital mammography applications. For instance, the scatter-to-primary ratio (SPR), the scatter fraction (SF) and the normalized mean glandular dose (DgN) were evaluated by simulation and compared with values found in earlier studies. Our results show good agreement for the evaluated quantities, within 5% or better for the scatter- and dose-related quantities when compared to the literature. Finally, a DE imaging chain was simulated and the visualization of microcalcifications was investigated.
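
    For context, the dual-energy technique mentioned above typically combines a low- and a high-energy image by weighted logarithmic subtraction so that the contrast of a chosen background tissue cancels; a standard form (not necessarily the exact one used in this work) is

      I_{\mathrm{DE}}(x,y) = \ln I_{H}(x,y) - w \, \ln I_{L}(x,y)

    with the weight w chosen to suppress the glandular/adipose background and leave the microcalcification (or contrast-agent) signal.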

  5. Effect of the multiple scattering of electrons in Monte Carlo simulation of LINACS.

    PubMed

    Vilches, Manuel; García-Pareja, Salvador; Guerrero, Rafael; Anguiano, Marta; Lallena, Antonio M

    2008-01-01

    Results obtained from Monte Carlo simulations of the transport of electrons through thin slabs of dense materials and through air slabs of different widths are analyzed. Various general-purpose Monte Carlo codes have been used: PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX. Non-negligible differences have been found between the angular and radial distributions downstream of the slabs. The effects of these differences on the depth doses measured in water are also discussed.

  6. Use of simulation to optimize the pinhole diameter and mask thickness for an x-ray backscatter imaging system

    NASA Astrophysics Data System (ADS)

    Vella, A.; Munoz, Andre; Healy, Matthew J. F.; Lane, David; Lockley, D.

    2017-08-01

    The PENELOPE Monte Carlo simulation code was used to determine the optimum thickness and aperture diameter of a pinhole mask for X-ray backscatter imaging in a security application. The mask material needs to be thick enough to absorb most X-rays, and the pinhole must be wide enough for a sufficient field of view whilst narrow enough for sufficient image spatial resolution. The model consisted of a fixed-geometry test object, various masks with and without pinholes, and a 1040 × 1340 pixel area detector inside a lead-lined camera housing. The photon energy distribution incident upon the masks was flat up to selected energy limits; this artificial source was used to avoid the optimisation being specific to any particular X-ray source technology. The pixelated detector was modelled by digitising the surface area represented by the PENELOPE phase-space file and integrating the energies of the photons impacting within each pixel; a MATLAB code was written for this. The image contrast, signal-to-background ratio, spatial resolution and collimation effect were calculated at the simulated detector as a function of pinhole diameter and various thicknesses of mask made of tungsten, tungsten/epoxy composite or bismuth alloy. A process of elimination was applied to identify suitable masks for a viable X-ray backscattering security application.
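
    The authors integrated photon energies per pixel with a MATLAB code; the snippet below is only a hedged re-illustration of that binning step in Python, with a made-up (x, y, E) column layout rather than the real PENELOPE phase-space format and an illustrative field of view.

      import numpy as np

      def energy_image(psf_xye, nx=1340, ny=1040, x_range=(-5.0, 5.0), y_range=(-4.0, 4.0)):
          """Integrate photon energies from phase-space rows (x, y, E) into a pixel map."""
          x, y, e = psf_xye.T
          image, _, _ = np.histogram2d(x, y, bins=(nx, ny),
                                       range=(x_range, y_range), weights=e)
          return image.T            # rows = y pixels, columns = x pixels

      # hypothetical usage: 100000 photons of 50 keV scattered over the detector plane
      rng = np.random.default_rng(1)
      sample = np.column_stack([rng.uniform(-5, 5, 100000),
                                rng.uniform(-4, 4, 100000),
                                np.full(100000, 50.0)])
      print(energy_image(sample).shape)   # (1040, 1340)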

  7. Application of the CIEMAT-NIST method to plastic scintillation microspheres.

    PubMed

    Tarancón, A; Barrera, J; Santiago, L M; Bagán, H; García, J F

    2015-04-01

    An adaptation of the MICELLE2 code was used to apply the CIEMAT-NIST tracing method to the activity calculation for radioactive solutions of pure beta emitters of different energies using plastic scintillation microspheres (PSm) and 3H as a tracing radionuclide. Particle quenching, very important in measurements with PSm, was computed with PENELOPE using geometries formed by a heterogeneous mixture of polystyrene microspheres and water. The results obtained with PENELOPE were adapted to be included in MICELLE2, which is capable of including the energy losses due to particle quenching in the computation of the detection efficiency. The activity calculation of 63Ni, 14C, 36Cl and 90Sr/90Y solutions was performed with deviations of 8.8%, 1.9%, 1.4% and 2.1%, respectively. Of the different parameters evaluated, those with the greatest impact on the activity calculation are, in order of importance, the energy of the radionuclide, the degree of quenching of the sample and the packing fraction of the geometry used in the computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F; Di Dia, A; Pedroli, G

    The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulation or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high-energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data; the dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the chosen parameter. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. FLUKA outcomes have been compared to PENELOPE v.2008 results, also calculated in this study. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been made. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous-slowing-down-approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: For monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between FLUKA and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, which can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X90, FLUKA and PENELOPE differ by less than 1% in water and less than 2% in bone for all the isotopes considered here. Complete FLUKA DPK data are given as Supplementary Material as a tool to perform dosimetry by analytical point-kernel convolution. Conclusions: FLUKA provides reliable results when transporting electrons in the low-energy range, proving to be an adequate tool for nuclear medicine dosimetry.
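
    With the energy tallied in concentric spherical shells around the point source, the dose point kernel at shell i follows directly from the deposited energy and the shell mass; this is a generic statement of how such tallies convert to dose per source particle, not a formula quoted from the paper:

      \mathrm{DPK}(r_i) =
          \frac{E_{\mathrm{dep},i}}
               {\rho \, \tfrac{4}{3}\pi \left(r_{i+1}^{3} - r_{i}^{3}\right) \, N}

    where ρ is the density of the medium (water or compact bone) and N the number of simulated primaries.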

  9. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, C; Huet, C; Barraux, V

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison between experimental dose distributions and dose distributions calculated with the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: For the experiment in the homogeneous phantom, 100% of the points passed the 3%/3 mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning …) and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small-beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
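
    The gamma index method used above follows the usual Low et al. definition: for every reference point r_r, the evaluated distribution is searched for the point r_e that minimizes

      \Gamma(\mathbf{r}_e, \mathbf{r}_r) =
          \sqrt{\frac{\lVert \mathbf{r}_e - \mathbf{r}_r \rVert^{2}}{\Delta d^{2}}
                + \frac{\left[D_e(\mathbf{r}_e) - D_r(\mathbf{r}_r)\right]^{2}}{\Delta D^{2}}},
      \qquad
      \gamma(\mathbf{r}_r) = \min_{\mathbf{r}_e} \Gamma(\mathbf{r}_e, \mathbf{r}_r)

    with Δd = 3 mm and ΔD = 3% of the reference dose here; a point passes when γ ≤ 1, and the passing rate is the fraction of points that do.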

  10. SU-E-T-36: A GPU-Accelerated Monte-Carlo Dose Calculation Platform and Its Application Toward Validating a ViewRay Beam Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y; Mazur, T; Green, O

    Purpose: To build a fast, accurate and easily deployable research platform for Monte Carlo dose calculations. We port the dose calculation engine PENELOPE to C++ and accelerate calculations on a GPU. Simulations of a Co-60 beam model provided by ViewRay demonstrate the capabilities of the platform. Methods: We built software that incorporates a beam model interface, a CT-phantom model, a GPU-accelerated PENELOPE engine, and a GUI front-end. We rewrote the PENELOPE kernel in C++ (from Fortran) and accelerated the code on a GPU. We seamlessly integrated a Co-60 beam model (obtained from ViewRay) into our platform. Simulations of various field sizes and SSDs using a homogeneous water phantom generated PDDs, dose profiles, and output factors that were compared to experimental data. Results: With GPU acceleration using a dated graphics card (Nvidia Tesla C2050), a highly accurate simulation – including a 100 × 100 × 100 grid, 3 × 3 × 3 mm³ voxels, <1% uncertainty, and a 4.2 × 4.2 cm² field size – runs 24 times faster (20 minutes versus 8 hours) than when parallelizing on 8 threads across a new CPU (Intel i7-4770). Simulated PDDs, profiles and output ratios for the commercial system agree well with experimental data measured using radiographic film or an ionization chamber. Based on our analysis, this beam model is precise enough for general applications. Conclusions: Using a beam model for a Co-60 system provided by ViewRay, we evaluate a dose calculation platform that we developed. Comparison to measurements demonstrates the promise of our software for use as a research platform for dose calculations, with applications including quality assurance and treatment plan verification.

  11. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.

    PubMed

    Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel

    2004-06-21

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell Gamma Knife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between the two calculations are within 3% for the 18 and 14 mm helmets and within 10% for the 8 and 4 mm ones. In addition, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.

  12. SU-E-T-102: Determination of Dose Distributions and Water-Equivalence of MAGIC-F Polymer Gel for 60Co and 192Ir Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quevedo, A; Nicolucci, P

    2014-06-01

    Purpose: To analyse the water equivalence of MAGIC-f polymer gel for 60Co and 192Ir clinical brachytherapy sources, through dose distributions simulated with the PENELOPE Monte Carlo code. Methods: The real geometries of the 60Co (BEBIG, model Co0.A86) and 192Ir (Varian, model GammaMed Plus) clinical brachytherapy sources were modelled in the PENELOPE Monte Carlo simulation code. The most probable photon emission lines were used for both sources: 17 emission lines for 192Ir and 12 lines for 60Co. The dose distributions were obtained in a cubic homogeneous water or gel phantom (30 × 30 × 30 cm³), with the source positioned in the middle of the phantom. In all cases the number of simulated showers was kept constant at 10⁹ particles. A specific material for the gel was constructed in PENELOPE using the weight fractions of the MAGIC-f components: wH = 0.1062, wC = 0.0751, wN = 0.0139, wO = 0.8021, wS = 2.58 × 10⁻⁶ and wCu = 5.08 × 10⁻⁶. The voxel size in the dose distributions was 0.6 mm. Dose distribution maps in the longitudinal and radial directions through the centre of the source were used to analyse the water equivalence of MAGIC-f. Results: For the 60Co source, the maximum differences in relative dose between gel and water were 0.65% and 1.90% in the radial and longitudinal directions, respectively. For 192Ir, the maximum differences in relative dose were 0.30% and 1.05% in the radial and longitudinal directions, respectively. The equivalence of the materials can also be verified through the effective atomic number and density of each material: Zeff,MAGIC-f = 7.07, ρMAGIC-f = 1.060 g/cm³ and Zeff,water = 7.22. Conclusion: The results show that MAGIC-f is water equivalent, and consequently suitable for simulating soft tissue, for cobalt and iridium energies. Hence, the gel can be used as a dosimeter in clinical applications. Further investigation of its use in a clinical protocol is needed.
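
    The effective atomic numbers quoted above are commonly obtained with the Mayneord power-law rule; this particular exponent is an assumption here, since the abstract does not state which definition was used:

      Z_{\mathrm{eff}} = \left( \sum_i a_i Z_i^{2.94} \right)^{1/2.94}

    where a_i is the fractional contribution of element i to the total number of electrons in the material.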

  13. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F.; Mairani, A.; Battistoni, G.

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the centre of a water (bone) sphere, and deposited energy has been tallied in concentric shells. FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been made. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between FLUKA and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, and can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X_90, FLUKA and PENELOPE differ by less than 1% in water and less than 2% in bone for all of the isotopes considered here. Complete FLUKA DPK data are given as Supplementary Material as a tool for performing dosimetry by analytical point-kernel convolution. Conclusions: FLUKA provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
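
    The DPK tally described in the Methods section is conceptually simple: energy deposits around an isotropic point source are binned into concentric spherical shells and each bin is divided by the shell mass. The sketch below shows that bookkeeping step for a list of (radius, energy) deposits; the bin width, density and toy deposit list are illustrative assumptions, not values from the paper.

        import math

        def dose_point_kernel(deposits, r_max_cm, n_bins=100, density_g_cm3=1.0):
            """Bin energy deposits (r_cm, e_MeV) from a point isotropic source into
            concentric spherical shells and return the dose per shell (MeV/g)."""
            dr = r_max_cm / n_bins
            energy = [0.0] * n_bins
            for r, e in deposits:
                i = int(r / dr)
                if i < n_bins:
                    energy[i] += e
            kernel = []
            for i, e in enumerate(energy):
                r_in, r_out = i * dr, (i + 1) * dr
                shell_mass = density_g_cm3 * 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)
                kernel.append(e / shell_mass)
            return kernel

        if __name__ == "__main__":
            # Toy deposits: most energy close to the source, a tail further out.
            toy = [(0.01, 0.05), (0.02, 0.03), (0.05, 0.01), (0.09, 0.005)]
            print(dose_point_kernel(toy, r_max_cm=0.1, n_bins=10)[:5])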

  14. Simulation of MeV electron energy deposition in CdS quantum dots absorbed in silicate glass for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Baharin, R.; Hobson, P. R.; Smith, D. R.

    2010-09-01

    We are currently developing 2D dosimeters with optical readout based on CdS or CdS/CdSe core-shell quantum dots using commercially available materials. In order to understand the limitations on the measurement of a 2D radiation profile, the 3D deposited-energy profile of MeV-energy electrons in CdS quantum-dot-doped silica glass has been studied by Monte Carlo simulation using the CASINO and PENELOPE codes. Profiles for silica glass and CdS quantum-dot-doped silica glass were then compared.

  15. Track structure of protons and other light ions in liquid water: applications of the LIonTrack code at the nanometer scale.

    PubMed

    Bäckström, G; Galassi, M E; Tilly, N; Ahnesjö, A; Fernández-Varea, J M

    2013-06-01

    The LIonTrack (Light Ion Track) Monte Carlo (MC) code for the simulation of H(+), He(2+), and other light ions in liquid water is presented together with the results of a novel investigation of energy-deposition site properties from single ion tracks. The continuum distorted-wave formalism with the eikonal initial state approximation (CDW-EIS) is employed to generate the initial energy and angle of the electrons emitted in ionizing collisions of the ions with H2O molecules. The model of Dingfelder et al. ["Electron inelastic-scattering cross sections in liquid water," Radiat. Phys. Chem. 53, 1-18 (1998); "Comparisons of calculations with PARTRAC and NOREC: Transport of electrons in liquid water," Radiat. Res. 169, 584-594 (2008)] is linked to the general-purpose MC code PENELOPE/penEasy to simulate the inelastic interactions of the secondary electrons in liquid water. In this way, the extended PENELOPE/penEasy code may provide an improved description of the 3D distribution of energy deposits (EDs), making it suitable for applications at the micrometer and nanometer scales. Single-ionization cross sections calculated with the ab initio CDW-EIS formalism are compared to available experimental values, some of them reported very recently, and the theoretical electronic stopping powers are benchmarked against those recommended by the ICRU. The authors also analyze distinct aspects of the spatial patterns of EDs, such as the frequency of nearest-neighbor distances for various radiation qualities, and the variation of the mean specific energy imparted in nanoscopic targets located around the track. For 1 MeV/u particles, the C(6+) ions generate about 15 times more clusters of six EDs within an ED distance of 3 nm than H(+). On average clusters of two to three EDs for 1 MeV/u H(+) and clusters of four to five EDs for 1 MeV/u C(6+) could be expected for a modeling double strand break distance of 3.4 nm.
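
    A quantity such as "clusters of six energy deposits within 3 nm" can be estimated from a simulated track with a simple neighbourhood count. The sketch below counts, for each energy deposit, how many deposits fall within a given distance and reports the sites reaching a chosen multiplicity; it is a naive O(N^2) illustration with invented coordinates, not the authors' analysis code.

        import math

        def cluster_sites(points_nm, radius_nm=3.0, multiplicity=6):
            """Return the indices of energy deposits that have at least
            `multiplicity` deposits (including themselves) within `radius_nm`."""
            sites = []
            for i, p in enumerate(points_nm):
                neighbours = sum(1 for q in points_nm if math.dist(p, q) <= radius_nm)
                if neighbours >= multiplicity:
                    sites.append(i)
            return sites

        if __name__ == "__main__":
            # Toy track: a dense knot of deposits plus a few isolated ones (nm).
            track = [(0, 0, 0), (0.5, 0, 0), (0, 0.8, 0), (1.0, 1.0, 0.5),
                     (0.2, 0.3, 0.1), (1.5, 0.2, 0.4), (20, 20, 20), (40, 0, 0)]
            print("cluster sites:", cluster_sites(track, radius_nm=3.0, multiplicity=6))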

  16. A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE

    NASA Astrophysics Data System (ADS)

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-06-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^1/2 and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.

  17. Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object

    NASA Astrophysics Data System (ADS)

    Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain

    2011-03-01

    Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of Treatment Planning Systems (TPS) in radiotherapy; this method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on an ECLIPSE VARIAN Treatment Planning System. Large dose variations were observed, particularly in air- and bone-equivalent materials. In the current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the VARIAN Eclipse Anisotropic Analytical Algorithm was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code, version 2003. To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using pelvic DTOs and was performed for low- and high-energy photon beams (6 and 18 MV) on a 2100CD VARIAN linear accelerator. A square field (10 × 10 cm²) was used. Taking the MC data as reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm and the dose difference to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). Monte Carlo PENELOPE computes the absorbed dose to the medium, whereas the TPS computes dose to water; we therefore used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show a strong consistency between ECLIPSE and MC calculations on the beam axis.
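
    The conversion step mentioned at the end of the abstract (Siebers et al., Bragg-Gray cavity theory) amounts to multiplying the Monte Carlo dose to medium by the water-to-medium mass collision stopping-power ratio averaged over the local electron spectrum. A minimal sketch follows; the numerical ratios in the example are placeholders chosen for illustration, not tabulated data from the paper.

        def dose_to_water(dose_to_medium_gy, stopping_power_ratio_w_med):
            """Bragg-Gray style conversion: D_w = D_med * (S/rho)_w,med averaged
            over the electron spectrum at the point of interest."""
            return dose_to_medium_gy * stopping_power_ratio_w_med

        if __name__ == "__main__":
            # Placeholder spectrum-averaged ratios (illustrative, not tabulated data).
            examples = {"soft tissue": 1.01, "cortical bone": 1.11, "lung": 1.00}
            d_med = 2.00  # Gy scored by the Monte Carlo in the medium
            for medium, ratio in examples.items():
                print(medium, "-> D_w =", round(dose_to_water(d_med, ratio), 3), "Gy")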

  18. Monte Carlo investigation of positron annihilation in medical positron emission tomography

    NASA Astrophysics Data System (ADS)

    Chin, P. W.; Spyrou, N. M.

    2007-09-01

    A number of Monte Carlo codes are available for simulating positron emission tomography (PET); however, their physics approximations differ. A number of radiation processes are deemed negligible, some without rigorous investigation, and some of the PET literature asserts that approximations are valid without citing the data source. The radiation source is the first step in Monte Carlo simulations; for some codes this is a pair of 511 keV photons emitted 180° apart, not polyenergetic positrons with radiation histories of their own. Without prior assumptions, we investigated electron-positron annihilation under clinical PET conditions. Just before annihilation, we tallied the positron energy and position; right after annihilation, we tallied the energy and separation angle of the photon pairs. When comparing PET textbooks with theory, PENELOPE and EGSnrc, only the latter three agreed. From 10⁶ radiation histories, a positron source of 15O in a chest phantom annihilated at energies as high as 1.58 MeV, producing photons with energies of 0.30-2.20 MeV, 79-180° apart. From 10⁶ radiation histories, an 18F positron source in a head phantom annihilated at energies as high as 0.56 MeV, producing 0.33-1.18 MeV photons 109-180° apart. In-flight annihilation accounted for 2.5% and 0.8% of annihilation events in the chest and head phantoms, respectively. PET textbooks typically either do not mention any deviation from 180° or state a deviation of 0.25° or 0.5°. Our findings are founded on the well-established Heitler cross-sections and relativistic kinematics, both adopted unanimously by PENELOPE, EGSnrc and GEANT4. Our results highlight the effects of annihilation in flight, a process sometimes forgotten within the PET community.
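
    The separation angles quoted here follow from relativistic two-body kinematics: for a positron of kinetic energy T annihilating on an electron at rest, symmetric emission gives the minimum opening angle 2·arccos(sqrt(T/(T + 2·m_e·c²))). The short check below is our own illustration of that textbook formula, not the authors' code, and it lands close to the quoted minimum angles for 0.56 and 1.58 MeV positrons.

        import math

        ME_C2 = 0.511  # electron rest energy in MeV

        def minimum_opening_angle_deg(kinetic_energy_mev):
            """Smallest photon separation angle for annihilation in flight of a
            positron with the given kinetic energy on an electron at rest
            (symmetric emission in the lab frame)."""
            t = kinetic_energy_mev
            cos_half = math.sqrt(t / (t + 2.0 * ME_C2))
            return 2.0 * math.degrees(math.acos(cos_half))

        if __name__ == "__main__":
            for t in (0.0, 0.56, 1.58):
                print(f"T = {t:4.2f} MeV -> minimum opening angle "
                      f"{minimum_opening_angle_deg(t):6.1f} deg")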

  19. Late Style in the Novels of Barbara Pym and Penelope Mortimer.

    ERIC Educational Resources Information Center

    Wyatt-Brown, Anne M.

    1988-01-01

    Evaluated novels written in old age by two contemporary British novelists, Barbara Pym and Penelope Mortimer, which revealed their late style was related to lifelong methods of adapting to changes in their situation. Found strength and direction of their creativity in old age fluctuated in response to their aging bodies and their new roles in…

  20. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, two sets of DPKs simulated with "Geant4-DNA-CPA100" were compared: the first set used Geant4's default settings, and the second used the original CPA100 code's default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences were observed between 1 keV and 10 keV. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.

  1. Integrity and security in an Ada runtime environment

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L.

    1991-01-01

    A review is provided of the Formal Methods group discussions. It was stated that integrity is not a pure mathematical dual of security. The input data is part of the integrity domain. The group provided a roadmap for research. One item of the roadmap and the final position statement are closely related to the space shuttle and space station. The group's position is to use a safe subset of Ada. Examples of safe subsets include the Army Secure Operating System and the Penelope Ada verification tool. A conservative approach is recommended when writing Ada code for life- and property-critical systems.

  2. Using Penelope to assess the correctness of NASA Ada software: A demonstration of formal methods as a counterpart to testing

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey

    1993-01-01

    Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.

  3. Air-kerma strength determination of a miniature x-ray source for brachytherapy applications

    NASA Astrophysics Data System (ADS)

    Davis, Stephen D.

    A miniature x-ray source has been developed by Xoft Inc. for high dose-rate brachytherapy treatments. The source is contained in a 5.4 mm diameter water-cooling catheter. The source voltage can be adjusted from 40 kV to 50 kV and the beam current is adjustable up to 300 muA. Electrons are accelerated toward a tungsten-coated anode to produce a lightly-filtered bremsstrahlung photon spectrum. The sources were initially used for early-stage breast cancer treatment using a balloon applicator. More recently, Xoft Inc. has developed vaginal and surface applicators. The miniature x-ray sources have been characterized using a modification of the American Association of Physicists in Medicine Task Group No. 43 formalism normally used for radioactive brachytherapy sources. Primary measurements of air kerma were performed using free-air ionization chambers at the University of Wisconsin (UW) and the National Institute of Standards and Technology (NIST). The measurements at UW were used to calibrate a well-type ionization chamber for clinical verification of source strength. Accurate knowledge of the emitted photon spectrum was necessary to calculate the corrections required to determine air-kerma strength, defined in vacuo. Theoretical predictions of the photon spectrum were calculated using three separate Monte Carlo codes: MCNP5, EGSnrc, and PENELOPE. Each code used different implementations of the underlying radiological physics. Benchmark studies were performed to investigate these differences in detail. The most important variation among the codes was found to be the calculation of fluorescence photon production following electron-induced vacancies in the L shell of tungsten atoms. The low-energy tungsten L-shell fluorescence photons have little clinical significance at the treatment distance, but could have a large impact on air-kerma measurements. Calculated photon spectra were compared to spectra measured with high-purity germanium spectroscopy systems at both UW and NIST. The effects of escaped germanium fluorescence photons and Compton-scattered photons were taken into account for the UW measurements. The photon spectrum calculated using the PENELOPE Monte Carlo code had the best agreement with the spectrum measured at NIST. Corrections were applied to the free-air chamber measurements to arrive at an air-kerma strength determination for the miniature x-ray sources.

  4. New method for qualitative simulations of water resources systems. 2. Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antunes, M.P.; Seixas, M.J.; Camara, A.S.

    1987-11-01

    SLIN (Simulacao Linguistica) is a new method for qualitative dynamic simulation. As was presented previously, SLIN relies upon a categorical representation of variables which are manipulated by logical rules. Two applications to water resources systems are included to illustrate SLIN's potential usefulness: the environmental impact evaluation of a hydropower plant and the assessment of oil dispersion in the sea after a tanker wreck.

  5. Monte Carlo simulation of β γ coincidence system using plastic scintillators in 4π geometry

    NASA Astrophysics Data System (ADS)

    Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.

    2007-09-01

    A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory at IPEN, São Paulo, Brazil, has been applied to simulate a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons had been calculated previously with the PENELOPE code and were applied as input data to the Esquema code. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional-counter 4πβ(PC)-γ coincidence system.
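
    The principle behind 4πβ-γ coincidence counting can be stated in three relations: Nβ = N0·εβ, Nγ = N0·εγ and Nc = N0·εβ·εγ, so that in the ideal case the activity follows directly as N0 = Nβ·Nγ/Nc. The sketch below encodes this textbook relation with invented count rates; the Monte Carlo procedure of the paper generalises it by simulating the full decay scheme rather than relying on the ideal-efficiency assumption.

        def ideal_coincidence_activity(n_beta, n_gamma, n_coinc):
            """Ideal 4pi-beta/gamma coincidence relation: N0 = Nb * Ng / Nc.
            Real standardisations correct this with efficiency-dependent terms,
            here obtained from Monte Carlo (code Esquema) rather than extrapolation."""
            return n_beta * n_gamma / n_coinc

        if __name__ == "__main__":
            # Invented count rates (s^-1) consistent with eff_beta = 0.8, eff_gamma = 0.3.
            n0_true = 1000.0
            nb, ng, nc = 0.8 * n0_true, 0.3 * n0_true, 0.8 * 0.3 * n0_true
            print("estimated activity:", ideal_coincidence_activity(nb, ng, nc), "s^-1")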

  6. Extension of PENELOPE to protons: simulation of nuclear reactions and benchmark with Geant4.

    PubMed

    Sterpin, E; Sorriaux, J; Vynckier, S

    2013-11-01

    Describing the implementation of nuclear reactions in the extension of the Monte Carlo code (MC) PENELOPE to protons (PENH) and benchmarking with Geant4. PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic collisions (EM). The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer-Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the scattering analysis interactive dialin database for (1)H and ICRU 63 data for (12)C, (14)N, (16)O, (31)P, and (40)Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on and integral depth-dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth-dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. For simulations with EM collisions only, integral depth-dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth-dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth-dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg-peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.

  7. Extension of PENELOPE to protons: Simulation of nuclear reactions and benchmark with Geant4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterpin, E.; Sorriaux, J.; Vynckier, S.

    2013-11-15

    Purpose: Describing the implementation of nuclear reactions in the extension of the Monte Carlo code (MC) PENELOPE to protons (PENH) and benchmarking with Geant4. Methods: PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic collisions (EM). The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer–Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the scattering analysis interactive dialin database for 1H and ICRU 63 data for 12C, 14N, 16O, 31P, and 40Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on and integral depth–dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth–dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. Results: For simulations with EM collisions only, integral depth–dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth–dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth–dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg-peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). Conclusions: A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.

  8. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  9. MAGIC-f Gel in Nuclear Medicine Dosimetry: study in an external beam of Iodine-131

    NASA Astrophysics Data System (ADS)

    Schwarcke, M.; Marques, T.; Garrido, C.; Nicolucci, P.; Baffa, O.

    2010-11-01

    MAGIC-f gel applicability in nuclear medicine dosimetry was investigated by exposure to a 131I source. A calibration was made to provide known absorbed doses at different positions around the source. The absorbed dose in the gel was compared with a Monte Carlo simulation using the PENELOPE code and with thermoluminescent dosimetry (TLD). Using MRI analysis of the gel, an R2-dose sensitivity of 0.23 s⁻¹Gy⁻¹ was obtained. The agreement between dose-distance curves obtained with the Monte Carlo simulation and TLD was better than 97%, and between MAGIC-f and TLD it was better than 98%. The results show the potential of the polymer gel for applications in nuclear medicine where three-dimensional dose distributions are demanded.

  10. Comment on ‘Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy 125I and 192Ir sources and 60Co cell irradiation’

    NASA Astrophysics Data System (ADS)

    Lindborg, Lennart; Lillhök, Jan; Grindborg, Jan-Erik

    2015-11-01

    The relative standard deviation, σr,D, of calculated multi-event distributions of specific energy for 60Co γ rays was reported by the authors F Villegas, N Tilly and A Ahnesjö (Phys. Med. Biol. 58 6149-62). The calculations were made with an upgraded version of the Monte Carlo code PENELOPE. When the results were compared to results derived from experiments with the variance method and simulated tissue equivalent volumes in the micrometre range a difference of about 50% was found. Villegas et al suggest wall-effects as the likely explanation for the difference. In this comment we review some publications on wall-effects and conclude that wall-effects are not a likely explanation.

  11. Comment on 'Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy (125)I and (192)Ir sources and (60)Co cell irradiation'.

    PubMed

    Lindborg, Lennart; Lillhök, Jan; Grindborg, Jan-Erik

    2015-11-07

    The relative standard deviation, σr,D, of calculated multi-event distributions of specific energy for (60)Co γ rays was reported by the authors F Villegas, N Tilly and A Ahnesjö (Phys. Med. Biol. 58 6149-62). The calculations were made with an upgraded version of the Monte Carlo code PENELOPE. When the results were compared to results derived from experiments with the variance method and simulated tissue equivalent volumes in the micrometre range a difference of about 50% was found. Villegas et al suggest wall-effects as the likely explanation for the difference. In this comment we review some publications on wall-effects and conclude that wall-effects are not a likely explanation.

  12. 2016 Summer Series - Penelope Boston - Subsurface Astrobiology: Cave Habitats on Earth, Mars and Beyond

    NASA Image and Video Library

    2016-08-09

    In our quest to explore other planets, we only have our own planet as an analogue to the environments we may find life. By exploring extreme environments on Earth, we can model conditions that may be present on other celestial bodies and select locations to explore for signatures of life. Dr. Penelope Boston, the new director of the NASA Astrobiology Institute at Ames, will describe her work in some of Earth’s most diverse caves and how they inform future exploration of Mars and the search for life in our solar system.

  13. SU-E-T-467: Monte Carlo Dosimetric Study of the New Flexisource Co-60 High Dose Rate Source.

    PubMed

    Vijande, J; Granero, D; Perez-Calatayud, J; Ballester, F

    2012-06-01

    Recently, a new HDR 60Co brachytherapy source, the Flexisource Co-60, has been developed (Nucletron B.V.). This study aims to obtain quality dosimetric data for this source for its use in clinical practice, as required by the AAPM and ESTRO. The PENELOPE 2008 and GEANT4 Monte Carlo codes were used to dosimetrically characterize this source. Water composition and mass density were those recommended by the AAPM. Due to the high energy of 60Co, the dose at small distances cannot be approximated by collisional kerma; we have therefore scored absorbed dose to water for r < 0.75 cm and collisional kerma from 0.75-0.8 cm outwards, the two agreeing to within 2% closer to the source. Using PENELOPE 2008 and GEANT4, an average dose rate constant of Λ = 1.085 ± 0.003 cGy/(h U) (with k = 1, Type A uncertainties) was obtained. The dose rate constant, radial dose function and anisotropy functions for the Flexisource Co-60 are compared with published data for other Co-60 sources. Dosimetric data are thus provided for the new Flexisource Co-60 source, which had not been studied previously in the literature. Once the data provided by this study are entered into treatment planning systems, the source can be used in clinical practice. This project has been funded by Nucletron BV.
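
    For context, sources of this kind are normally reported through the AAPM TG-43 formalism, in which the dose rate factorises as D(r,θ) = S_K·Λ·[G(r,θ)/G(r0,θ0)]·g(r)·F(r,θ), with Λ the dose rate constant quoted above. The sketch below evaluates that product with a point-source geometry function; apart from Λ, all numerical inputs are placeholders chosen for illustration.

        def tg43_dose_rate(sk_U, lam_cGy_per_hU, r_cm, g_r, F_r_theta, r0_cm=1.0):
            """TG-43 dose rate (cGy/h) with a point-source geometry function
            G(r) = 1/r^2, normalised at the reference distance r0 = 1 cm."""
            geometry_ratio = (r0_cm / r_cm) ** 2
            return sk_U * lam_cGy_per_hU * geometry_ratio * g_r * F_r_theta

        if __name__ == "__main__":
            # Lambda from the abstract; air-kerma strength, g(r) and F are placeholders.
            print(tg43_dose_rate(sk_U=10.0, lam_cGy_per_hU=1.085,
                                 r_cm=2.0, g_r=0.98, F_r_theta=1.0), "cGy/h")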

  14. Radial secondary electron dose profiles and biological effects in light-ion beams based on analytical and Monte Carlo calculations using distorted wave cross sections.

    PubMed

    Wiklund, Kristin; Olivera, Gustavo H; Brahme, Anders; Lind, Bengt K

    2008-07-01

    To speed up dose calculation, an analytical pencil-beam method has been developed to calculate the mean radial dose distributions due to secondary electrons that are set in motion by light ions in water. For comparison, radial dose profiles calculated using a Monte Carlo technique have also been determined. An accurate comparison of the resulting radial dose profiles of the Bragg peak for (1)H(+), (4)He(2+) and (6)Li(3+) ions has been performed. The double differential cross sections for secondary electron production were calculated using the continuous distorted wave-eikonal initial state method (CDW-EIS). For the secondary electrons that are generated, the radial dose distribution for the analytical case is based on the generalized Gaussian pencil-beam method and the central axis depth-dose distributions are calculated using the Monte Carlo code PENELOPE. In the Monte Carlo case, the PENELOPE code was used to calculate the whole radial dose profile based on CDW data. The present pencil-beam and Monte Carlo calculations agree well at all radii. A radial dose profile that is shallower at small radii and steeper at large radii than the conventional 1/r(2) is clearly seen with both the Monte Carlo and pencil-beam methods. As expected, since the projectile velocities are the same, the dose profiles of Bragg-peak ions of 0.5 MeV (1)H(+), 2 MeV (4)He(2+) and 3 MeV (6)Li(3+) are almost the same, with about 30% more delta electrons in the sub keV range from (4)He(2+)and (6)Li(3+) compared to (1)H(+). A similar behavior is also seen for 1 MeV (1)H(+), 4 MeV (4)He(2+) and 6 MeV (6)Li(3+), all classically expected to have the same secondary electron cross sections. The results are promising and indicate a fast and accurate way of calculating the mean radial dose profile.

  15. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  16. Self-triggering readout system for the neutron lifetime experiment PENeLOPE

    NASA Astrophysics Data System (ADS)

    Gaisbauer, D.; Bai, Y.; Konorov, I.; Paul, S.; Steffen, D.

    2016-02-01

    PENeLOPE is a neutron lifetime experiment under development at the Technische Universität München and located at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), aiming to achieve a precision of 0.1 seconds. The detector for PENeLOPE consists of about 1250 Avalanche Photodiodes (APDs) with a total active area of 1225 cm². The decay proton detector and its electronics, including shaper, preamplifier, ADC and FPGA cards, will be operated at a high electrostatic potential of -30 kV and in a magnetic field of 0.6 T. In addition, the APDs will be cooled to 77 K. The 1250 APDs are divided into 14 groups of 96 channels, including spares. A 12-bit ADC digitizes the detector signals at 1 MSps. Firmware was developed for the detector, including a self-triggering readout with continuous pedestal calculation and configurable signal detection. Data transmission and configuration are done via the Switched Enabling Protocol (SEP), a low-layer time-division multiplexing protocol that provides deterministic latency for time-critical messages and carries IPBus and JTAG interfaces. The network has an n:1 topology, reducing the number of optical links.
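
    The "self-triggering readout with continuous pedestal calculation" can be pictured as a running baseline estimate per channel plus a threshold test on every ADC sample. The behavioural sketch below is plain Python, whereas the experiment implements this in FPGA firmware; the exponential-moving-average pedestal, the threshold and the toy trace are all illustrative assumptions.

        def self_triggering(samples, threshold=25, pedestal_weight=0.01):
            """Return (index, amplitude) pairs for samples exceeding the running
            pedestal by `threshold` ADC counts. The pedestal is a slow exponential
            moving average updated only on non-triggering samples."""
            pedestal = float(samples[0])
            hits = []
            for i, s in enumerate(samples):
                if s - pedestal > threshold:
                    hits.append((i, s - pedestal))                  # signal above baseline
                else:
                    pedestal += pedestal_weight * (s - pedestal)    # track slow drift
            return hits

        if __name__ == "__main__":
            # Flat baseline around 100 counts with two injected pulses.
            trace = [100] * 50 + [180, 150, 120] + [100] * 50 + [160] + [100] * 20
            print(self_triggering(trace))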

  17. The web of Penelope. Regulating women's night work: an unfinished job?

    PubMed

    Riva, Michele A; Scordo, Francesco; Turato, Massimo; Messina, Giovanni; Cesana, Giancarlo

    2015-12-01

    Even though unhealthy consequences of night work for women have been evidenced by international scientific literature only in recent years, they were well acknowledged from ancient times. This essay traces the historical evolution of women's health conditions at work, focusing specifically on nocturnal work. Using the legendary web of Penelope of ancient Greek myths as a metaphor, the paper analyses the early limitations of night-work for women in pre-industrial era and the development of a modern international legislation on this issue, aimed at protecting women's health at the beginning of the twentieth century. The reform of national legislations in a gender-neutral manner has recently abolished gender disparities in night-work, but it seems it also reduced women's protection at work.

  18. [Twentieth-century Penelopes: popular culture revisited].

    PubMed

    Favaro, Cleci Eulalia

    2010-01-01

    During their settlement of the so-called Old Italian Colonies of Rio Grande do Sul, immigrants constructed a set of positive values that were to serve as an emotional support and a means of outside communication. When women immigrants embroidered images and sayings on wall hangings or kitchen towels made of rustic fabric, they helped nourish the dream of a better life, sought by all and achieved by some. The objects crafted by these twentieth-century Penelopes bear witness to a way of doing, thinking, and acting. Local museums and exhibits have fostered the recovery of old-time embroidery techniques and themes; sold at open-air markets and regional festivals, these products represent income for women whose age excludes them from the formal labor market.

  19. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  20. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

    The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10 × 10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and PENELOPE Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
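
    The two beam-quality indices named here are linked, for clinical photon beams, by the empirical relation recommended in IAEA TRS-398, TPR20,10 ≈ 1.2661·PDD20,10 − 0.0595, where PDD20,10 is the ratio of percentage depth doses at 20 cm and 10 cm depth for a 10 × 10 cm² field at 100 cm SSD. The snippet below applies that relation; the input depth-dose values are invented for illustration.

        def tpr_20_10_from_pdd(pdd20, pdd10):
            """Empirical TRS-398 relation between the two photon beam-quality
            indices: TPR20,10 = 1.2661 * PDD20,10 - 0.0595."""
            pdd_ratio = pdd20 / pdd10
            return 1.2661 * pdd_ratio - 0.0595

        if __name__ == "__main__":
            # Illustrative depth-dose values (percent) for a 6 MV-like beam.
            print(round(tpr_20_10_from_pdd(pdd20=38.5, pdd10=67.0), 3))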

  1. Implementation of new physics models for low energy electrons in liquid water in Geant4-DNA.

    PubMed

    Bordage, M C; Bordes, J; Edel, S; Terrissol, M; Franceries, X; Bardiès, M; Lampe, N; Incerti, S

    2016-12-01

    A new alternative set of elastic and inelastic cross sections has been added to the very low energy extension of the Geant4 Monte Carlo simulation toolkit, Geant4-DNA, for the simulation of electron interactions in liquid water. These cross sections have been obtained from the CPA100 Monte Carlo track structure code, which has been a reference in the microdosimetry community for many years. They are compared to the default Geant4-DNA cross sections and show better agreement with published data. In order to verify the correct implementation of the CPA100 cross section models in Geant4-DNA, simulations of the number of interactions and ranges were performed using Geant4-DNA with this new set of models, and the results were compared with corresponding results from the original CPA100 code. Good agreement is observed between the implementations, with relative differences lower than 1% regardless of the incident electron energy. Useful quantities related to the deposited energy at the scale of the cell or the organ of interest for internal dosimetry, like dose point kernels, are also calculated using these new physics models. They are compared with results obtained using the well-known Penelope Monte Carlo code.

  2. Renewable Energy Potential for New Mexico

    EPA Pesticide Factsheets

    RE-Powering America's Land: Renewable Energy on Contaminated Land and Mining Sites was presented by Penelope McDaniel during the 2008 Brown to Green: Make the Connection to Renewable Energy workshop.

  3. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    NASA Astrophysics Data System (ADS)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
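
    For tracking purposes, a body limited by quadric surfaces reduces to sign tests of implicit quadric functions. The sketch below evaluates a general quadric F(x, y, z) and classifies a point as inside, on or outside the surface; it mirrors the kind of test a quadric-geometry package performs, but the data layout and function names are our own illustration rather than the PENGEOM interface.

        def quadric(coeffs, x, y, z):
            """General quadric F(x,y,z) with coefficients
            (Axx, Axy, Axz, Ayy, Ayz, Azz, Ax, Ay, Az, A0)."""
            axx, axy, axz, ayy, ayz, azz, ax, ay, az, a0 = coeffs
            return (axx * x * x + axy * x * y + axz * x * z +
                    ayy * y * y + ayz * y * z + azz * z * z +
                    ax * x + ay * y + az * z + a0)

        def side(coeffs, point, tol=1e-12):
            """-1 inside, 0 on the surface, +1 outside (convention: F < 0 inside)."""
            f = quadric(coeffs, *point)
            return 0 if abs(f) < tol else (-1 if f < 0 else 1)

        if __name__ == "__main__":
            # Unit sphere centred at the origin: x^2 + y^2 + z^2 - 1 = 0.
            sphere = (1, 0, 0, 1, 0, 1, 0, 0, 0, -1)
            for p in [(0, 0, 0), (1, 0, 0), (2, 0, 0)]:
                print(p, side(sphere, p))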

  4. NOTE: Monte Carlo simulation of correction factors for IAEA TLD holders

    NASA Astrophysics Data System (ADS)

    Hultqvist, Martha; Fernández-Varea, José M.; Izewska, Joanna

    2010-03-01

    The IAEA standard thermoluminescent dosimeter (TLD) holder has been developed for the IAEA/WHO TLD postal dose program for audits of high-energy photon beams, and it is also employed by the ESTRO-QUALity assurance network (EQUAL) and several national TLD audit networks. Factors correcting for the influence of the holder on the TL signal under reference conditions have been calculated in the present work from Monte Carlo simulations with the PENELOPE code for 60Co γ-rays and 4, 6, 10, 15, 18 and 25 MV photon beams. The simulation results are around 0.2% smaller than measured factors reported in the literature, but well within the combined standard uncertainties. The present study supports the use of the experimentally obtained holder correction factors in the determination of the absorbed dose to water from the TL readings; the factors calculated by means of Monte Carlo simulations may be adopted for the cases where there are no measured data.

  5. Analytical response function for planar Ge detectors

    NASA Astrophysics Data System (ADS)

    García-Alvarez, Juan A.; Maidana, Nora L.; Vanin, Vito R.; Fernández-Varea, José M.

    2016-04-01

    We model the response function (RF) of planar HPGe x-ray spectrometers for photon energies between around 10 keV and 100 keV. The RF is based on the proposal of Seltzer [1981. Nucl. Instrum. Methods 188, 133-151] and takes into account the full-energy absorption in the Ge active volume, the escape of Ge Kα and Kβ x-rays and the escape of photons after one Compton interaction. The relativistic impulse approximation is employed instead of the Klein-Nishina formula to describe incoherent photon scattering in the Ge crystal. We also incorporate a simple model for the continuous component of the spectrum produced by the escape of photo-electrons from the active volume. In our calculations we include external interaction contributions to the RF: (i) the incoherent scattering effects caused by the detector's Be window and (ii) the spectrum produced by photo-electrons emitted in the Ge dead layer that reach the active volume. The analytical RF model is compared with pulse-height spectra simulated using the PENELOPE Monte Carlo code.

  6. Systematic discrepancies in Monte Carlo predictions of k-ratios emitted from thin films on substrates

    NASA Astrophysics Data System (ADS)

    Statham, P.; Llovet, X.; Duncumb, P.

    2012-03-01

    We have assessed the reliability of different Monte Carlo simulation programmes using the two available Bastin-Heijligers databases of thin-film measurements by EPMA. The MC simulation programmes tested include Curgenven-Duncumb MSMC, NISTMonte, Casino and PENELOPE. Plots of the ratio of calculated to measured k-ratios ("kcalc/kmeas") against various parameters reveal error trends that are not apparent in simple error histograms. The results indicate that the MC programmes perform quite differently on the same dataset. However, they appear to show a similar pronounced trend with a "hockey stick" shape in the "kcalc/kmeas versus kmeas" plots. The most sophisticated programme PENELOPE gives the closest correspondence with experiment but still shows a tendency to underestimate experimental k-ratios by 10 % for films that are thin compared to the electron range. We have investigated potential causes for this systematic behaviour and extended the study to data not collected by Bastin and Heijligers.

  7. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  8. TH-AB-201-09 [Medical Physics, Jun 2016, v. 43(6)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirzakhanian, L; Benmakhlouf, H; Seuntjens, J

    2016-06-15

    Purpose: To determine the k_(Qmsr,Q)^(fmsr,fref) factor, introduced in the small field formalism, for five common chamber types used in the calibration of the Leksell Gamma Knife Perfexion model over a range of different phantom electron densities. Methods: Five chamber types including Exradin-A16, A14SL, A14, A1SL and IBA-CC04 are modeled in EGSnrc and PENELOPE Monte Carlo codes using the blueprints provided by the manufacturers. The chambers are placed in a previously proposed water-filled phantom and four 16-cm diameter spherical phantoms made of liquid water, Solid Water, ABS and polystyrene. Dose to the cavity of the chambers and a small water volume are calculated using EGSnrc/PENELOPE codes. The calculations are performed over a range of phantom electron densities for two chamber orientations. Using the calculated dose ratios in the reference and machine-specific reference fields, the k_(Qmsr,Q)^(fmsr,fref) factor can be determined. Results: When chambers are placed along the symmetry axis of the collimator block (z-axis), the CC04 requires the smallest correction followed by A1SL and A16. However, when detectors are placed perpendicular to the z-axis, A14SL needs the smallest and A16 the largest correction. Moreover, an increase in the phantom electron density results in a linear increase in the k_(Qmsr,Q)^(fmsr,fref). Depending on the chambers, the agreement between this study and a previous study varies between 0.05–0.70% for liquid water, 0.07–0.85% for Solid Water and 0.00–0.60% for ABS phantoms. After applying the EGSnrc-calculated k_(Qmsr,Q)^(fmsr,fref) factors for A16 to the previously measured dose-rates in liquid water, Solid Water and ABS, normalized to the dose-rate measured with the TG-21 protocol and the ABS phantom, the dose-rate ratios are found to be 1.004±0.002, 0.996±0.002 and 0.998±0.002 (3σ) respectively. Conclusion: Knowing the electron density of the phantoms, the calculated k_(Qmsr,Q)^(fmsr,fref) values in this work will enable users to apply the appropriate correction for their own specific phantom material. LM acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
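
    The factor discussed in this abstract comes from the small-field formalism (Alfonso et al.): it is the ratio of dose-to-water over dose-to-detector evaluated in the machine-specific reference field and beam quality, divided by the same ratio in the conventional reference field. The minimal sketch below computes it from four Monte Carlo scored doses; the numbers are invented for illustration.

        def k_msr(dw_msr, ddet_msr, dw_ref, ddet_ref):
            """Small-field correction factor k_(Qmsr,Q)^(fmsr,fref):
            ratio of water-to-detector dose ratios in the machine-specific
            reference field (msr) and in the conventional reference field (ref)."""
            return (dw_msr / ddet_msr) / (dw_ref / ddet_ref)

        if __name__ == "__main__":
            # Invented Monte Carlo doses per history (arbitrary units).
            print(round(k_msr(dw_msr=3.20e-14, ddet_msr=3.15e-14,
                              dw_ref=1.50e-14, ddet_ref=1.49e-14), 4))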

  9. hybrid\\scriptsize{{MANTIS}}: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-01

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, like on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by the optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.

  10. Modeling the Efficiency of a Germanium Detector

    NASA Astrophysics Data System (ADS)

    Hayton, Keith; Prewitt, Michelle; Quarles, C. A.

    2006-10-01

    We are using the Monte Carlo Program PENELOPE and the cylindrical geometry program PENCYL to develop a model of the detector efficiency of a planar Ge detector. The detector is used for x-ray measurements in an ongoing experiment to measure electron bremsstrahlung. While we are mainly interested in the efficiency up to 60 keV, the model ranges from 10.1 keV (below the Ge absorption edge at 11.1 keV) to 800 keV. Measurements of the detector efficiency have been made in a well-defined geometry with calibrated radioactive sources: Co-57, Se-75, Ba-133, Am-241 and Bi-207. The model is compared with the experimental measurements and is expected to provide a better interpolation formula for the detector efficiency than simply using x-ray absorption coefficients for the major constituents of the detector. Using PENELOPE, we will discuss several factors, such as Ge dead layer, surface ice layer and angular divergence of the source, that influence the efficiency of the detector.

  11. Not-so-Soft Skills

    ERIC Educational Resources Information Center

    Curran, Mary

    2010-01-01

    Much recent discussion about the skills needed to secure Britain's economic recovery has focused on skills for employability. However, too often, these fundamental skills are understood in narrow functional or vocational terms. So-called "soft skills", what Penelope Tobin, in her 2008 paper "Soft Skills: the hard facts", terms "traits and…

  12. Magnetic field effects on the energy deposition spectra of MV photon radiation.

    PubMed

    Kirkby, C; Stanescu, T; Fallone, B G

    2009-01-21

    Several groups worldwide have proposed various concepts for improving megavoltage (MV) radiotherapy that involve irradiating patients in the presence of a magnetic field, either for image guidance in the case of hybrid radiotherapy-MRI machines or to introduce tighter control over dose distributions. The presence of a magnetic field alters the trajectory of charged particles between interactions with the medium and thus has the potential to alter energy deposition patterns within a sub-cellular target volume. In this work, we use the MC radiation transport code PENELOPE, with appropriate algorithms invoked to incorporate magnetic field deflections, to investigate the electron energy fluence in the presence of a uniform magnetic field and the energy deposition spectra within a 10 μm water sphere as a function of magnetic field strength. The simulations suggest only very minor changes to the electron fluence even for extremely strong magnetic fields. Further, calculations of the dose-averaged lineal energy indicate that a magnetic field strength of at least 70 T is required before beam quality will change by more than 2%.
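    As a reminder of the quantity behind that 2% threshold, the dose-averaged lineal energy is conventionally defined from the single-event distribution f(y) of the lineal energy y (energy imparted per mean chord length of the target); a sketch using standard ICRU notation, assumed here rather than taken from the paper:

    \[
    y = \frac{\varepsilon}{\bar{l}}, \qquad
    \bar{y}_D = \frac{\int_0^{\infty} y^{2} f(y)\,\mathrm{d}y}{\int_0^{\infty} y\, f(y)\,\mathrm{d}y}
    \]

    so the quoted 2% criterion refers to a 2% shift in this dose-weighted mean for the 10 μm water sphere.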

  13. Detour factors in water and plastic phantoms and their use for range and depth scaling in electron-beam dosimetry.

    PubMed

    Fernández-Varea, J M; Andreo, P; Tabata, T

    1996-07-01

    Average penetration depths and detour factors of 1-50 MeV electrons in water and plastic materials have been computed by means of analytical calculation, within the continuous-slowing-down approximation and including multiple scattering, and using the Monte Carlo codes ITS and PENELOPE. Results are compared to detour factors from alternative definitions previously proposed in the literature. Different procedures used in low-energy electron-beam dosimetry to convert ranges and depths measured in plastic phantoms into water-equivalent ranges and depths are analysed. A new simple and accurate scaling method, based on Monte Carlo-derived ratios of average electron penetration depths and thus incorporating the effect of multiple scattering, is presented. Data are given for most plastics used in electron-beam dosimetry together with a fit which extends the method to any other low-Z plastic material. A study of scaled depth-dose curves and mean energies as a function of depth for some plastics of common usage shows that the method improves the consistency and results of other scaling procedures in dosimetry with electron beams at therapeutic energies.
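    The essence of such a depth-scaling procedure can be sketched as follows (illustrative notation, with depths expressed as mass thicknesses in g/cm²; the paper's exact formulation may differ): a depth measured in a plastic phantom is converted to its water-equivalent depth through the ratio of Monte Carlo-derived average penetration depths in the two media,

    \[
    z_{w} \approx z_{\mathrm{pl}} \cdot \frac{\bar{R}_{w}}{\bar{R}_{\mathrm{pl}}}
    \]

    where \bar{R}_{w} and \bar{R}_{\mathrm{pl}} are the average electron penetration depths (in g/cm²) in water and in the plastic, so that multiple scattering, and not only the CSDA range, is folded into the conversion.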

  14. Learning from Our Lives: Women, Research, and Autobiography in Education.

    ERIC Educational Resources Information Center

    Neumann, Anna, Ed.; Peterson, Penelope L., Ed.

    The autobiographical essays in this volume offer insights into how the field of education might change as women assume positions of intellectual leadership. After the "Foreword" (Mary Catherine Bateson), the 13 chapters are: (1) "Research Lives: Women, Scholarship, and Autobiography in Education" (Anna Neumann and Penelope L.…

  15. Effects of Cooperative Learning on Achievement and Attitude among Students of Color.

    ERIC Educational Resources Information Center

    Vaughan, Winston

    2002-01-01

    Investigated the effects of cooperative learning on achievement in and attitudes toward mathematics among fifth graders of color in a culture different from that of the United States (Bermuda). Participants completed parts of the California Achievement Test and Penelope Peterson's Attitude Toward Mathematics Scale. Pre-test and post-test data…

  16. Physical reparative treatment in reptiles

    PubMed Central

    2013-01-01

    Background The tissue growth necessary to achieve complete or partial restitutio ad integrum after injury to soft and/or hard tissues in reptiles is variable and often requires a long time, depending on the species, the habitat and their intrinsic physiological characteristics. The purpose of this work was to see whether the tissue optimization (TO) treatment with a radio electric asymmetric conveyer (REAC) provided good results in these animals and whether its use translates into reduced tissue-repair times. This paper describes preliminary results in promoting tissue repair in reptiles. Case presentations A 5-year-old male Testudo graeca (Leo), a Trachemys scripta scripta (Mir) and a 15-year-old female Testudo hermanni (Juta) were evaluated because of soft tissue injuries. A 25-year-old female Trachemys scripta elegans (Ice), a 2.5-year-old female Trachemys scripta scripta (Penelope) and a 50-year-old male Testudo graeca (Margherito) were evaluated because of wounds of the carapace. Following debridement and traditional therapies, Leo, Penelope and Margherito were exposed to the radio electric asymmetric conveyer (REAC) device, with a specific treatment protocol named tissue optimization-basic (TO-B). Ice and Mir were also subjected to REAC treatment after wound debridement. Juta was treated with REAC treatment only. Complete wound healing was evident after 17 days for Leo, 7 days for Penelope, 27 days for Mir, 78 days for Ice and 14 days for Margherito. Juta showed considerable tissue activation within 2 days and complete wound healing in 5 days. Conclusion Our findings suggest that REAC TO-B treatment may provide advantages over traditional methods, with complete wound healing in Leo and suitable healing in the other patients. The REAC device with its specific TO-B treatment protocol, which induces tissue repair without causing severe stress to the patient, could therefore be a potential therapy for healing tissue damage in reptiles. Further studies are still needed to support our observations. PMID:23442770

  17. Comparisons of Wartime and Peacetime Disease and Non-Battle Injury Rates Aboard Ships of the British Royal Navy

    DTIC Science & Technology

    1991-06-01

    Ship names mentioned include Kenya, Cleopatra, Mauritius, Enterprise, Sirius, Sussex, Orion and Penelope; destroyers include Ledbury, Anthony, Liddesdale and Beaufort. Disease categories listed include diseases of the rectum and anus, diseases of the liver, other digestive diseases, and diseases of nutrition or metabolism (scurvy, beri-beri, gout, diabetes).

  18. Where Are All the Black Teachers? Discrimination in the Teacher Labor Market

    ERIC Educational Resources Information Center

    D'amico, Diana; Pawlewicz, Robert J.; Earley, Penelope M.; McGeehan, Adam P.

    2017-01-01

    In this article, Diana D'Amico, Robert J. Pawlewicz, Penelope M. Earley, and Adam P. McGeehan examine the racial composition of one public school district's teacher labor market through teacher application data and subsequent hiring decisions. Researchers and policy makers have long noted the lack of racial diversity among the nation's public…

  19. A resolution commemorating the 80th anniversary of the Daughters of Penelope, a preeminent international women's association and affiliate organization of the American Hellenic Educational Progressive Association (AHEPA).

    THOMAS, 111th Congress

    Sen. Snowe, Olympia J. [R-ME

    2009-04-29

    Senate - 03/26/2010: Resolution agreed to in Senate without amendment and with a preamble by Unanimous Consent.

  20. Characterization and Simulation of a New Design Parallel-Plate Ionization Chamber for CT Dosimetry at Calibration Laboratories

    NASA Astrophysics Data System (ADS)

    Perini, Ana P.; Neves, Lucio P.; Maia, Ana F.; Caldas, Linda V. E.

    2013-12-01

    In this work, a new extended-length parallel-plate ionization chamber was tested in the standard radiation qualities for computed tomography established according to the half-value layers defined in the IEC 61267 standard, at the Calibration Laboratory of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). The experimental characterization was performed following the IEC 61674 standard recommendations. The experimental results obtained with the ionization chamber studied in this work were compared to those obtained with a commercial pencil ionization chamber, showing good agreement. With the use of the PENELOPE Monte Carlo code, simulations were undertaken to evaluate the influence of the cables, insulator, PMMA body, collecting electrode, guard ring and screws, as well as of different materials and geometrical arrangements, on the energy deposited in the sensitive volume of the ionization chamber. The maximum influence observed was 13.3% for the collecting electrode, and regarding the use of different materials and designs, the substitutions showed that the original design presented the most suitable configuration. The experimental and simulated results obtained in this work show that this ionization chamber has appropriate characteristics to be used at calibration laboratories for dosimetry in standard computed tomography and diagnostic radiology quality beams.

  1. Conception and realization of a parallel-plate free-air ionization chamber for the absolute dosimetry of an ultrasoft X-ray beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groetz, J.-E., E-mail: jegroetz@univ-fcomte.fr; Mavon, C.; Fromm, M.

    2014-08-15

    We report the design of a millimeter-sized parallel plate free-air ionization chamber (IC) aimed at determining the absolute air kerma rate of an ultra-soft X-ray beam (E = 1.5 keV). The size of the IC was determined so that the measurement volume satisfies the condition of charged-particle equilibrium. The correction factors necessary to properly measure the absolute kerma using the IC have been established. Particular attention was given to the determination of the effective mean energy for the 1.5 keV photons using the PENELOPE code. Other correction factors were determined by means of computer simulation (COMSOL™ and FLUKA). Measurements of air kerma rates under specific operating parameters of the lab-bench X-ray source have been performed at various distances from that source and compared to Monte Carlo calculations. We show that the developed ionization chamber makes it possible to determine accurate photon fluence rates in routine work and will constitute substantial time-savings for future radiobiological experiments based on the use of ultra-soft X-rays.
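    For orientation, the air-kerma rate realized with a free-air chamber of this kind is conventionally obtained from the ionization current together with the chamber-specific correction factors; a hedged sketch of the standard relation (generic symbols, not the authors' exact notation):

    \[
    \dot{K}_{\mathrm{air}} = \frac{\dot{Q}}{\rho_{\mathrm{air}} V}\,
    \frac{W_{\mathrm{air}}}{e}\,\frac{1}{1-\bar{g}}\,\prod_i k_i
    \]

    where \dot{Q} is the collected current, \rho_{\mathrm{air}} V the mass of air in the measuring volume, W_{\mathrm{air}}/e the mean energy expended in air per unit charge, \bar{g} the radiative-loss fraction (negligible at 1.5 keV), and k_i the correction factors discussed above.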

  2. Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.

    PubMed

    Ritchie, Nicholas W M

    2017-06-01

    Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (the latter is discussed herein). These two models share many physical models, but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries, and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
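    For context, quantitative EPMA conventionally converts a measured k-ratio into composition through the Z·A·F matrix correction, with F being the fluorescence term these codes simulate; a hedged sketch in generic notation:

    \[
    k = \frac{I_{\mathrm{spec}}}{I_{\mathrm{std}}}, \qquad
    C_{\mathrm{spec}} \approx k \,\left[ Z \cdot A \cdot F \right]\, C_{\mathrm{std}}
    \]

    where I denotes the measured characteristic x-ray intensity and C the mass fraction of the element in the specimen and the standard.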

  3. Evaluation of computational models and cross sections used by MCNP6 for simulation of characteristic X-ray emission from thick targets bombarded by kiloelectronvolt electrons

    NASA Astrophysics Data System (ADS)

    Poškus, A.

    2016-09-01

    This paper evaluates the accuracy of the single-event (SE) and condensed-history (CH) models of electron transport in MCNP6.1 when simulating characteristic Kα, total K (=Kα + Kβ) and Lα X-ray emission from thick targets bombarded by electrons with energies from 5 keV to 30 keV. It is shown that the MCNP6.1 implementation of the CH model for the K-shell impact ionization leads to underestimation of the K yield by 40% or more for the elements with atomic numbers Z < 15 and overestimation of the Kα yield by more than 40% for the elements with Z > 25. The Lα yields are underestimated by more than an order of magnitude in CH mode, because MCNP6.1 neglects X-ray emission caused by electron-impact ionization of L, M and higher shells in CH mode (the Lα yields calculated in CH mode reflect only X-ray fluorescence, which is mainly caused by photoelectric absorption of bremsstrahlung photons). The X-ray yields calculated by MCNP6.1 in SE mode (using ENDF/B-VII.1 library data) are more accurate: the differences of the calculated and experimental K yields are within the experimental uncertainties for the elements C, Al and Si, and the calculated Kα yields are typically underestimated by (20-30)% for the elements with Z > 25, whereas the Lα yields are underestimated by (60-70)% for the elements with Z > 49. It is also shown that agreement of the experimental X-ray yields with those calculated in SE mode is additionally improved by replacing the ENDF/B inner-shell electron-impact ionization cross sections with the set of cross sections obtained from the distorted-wave Born approximation (DWBA), which are also used in the PENELOPE code system. The latter replacement causes a decrease of the average relative difference of the experimental X-ray yields and the simulation results obtained in SE mode to approximately 10%, which is similar to accuracy achieved with PENELOPE. This confirms that the DWBA inner-shell impact ionization cross sections are significantly more accurate than the corresponding ENDF/B cross sections when energy of incident electrons is of the order of the binding energy.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moura, Eduardo S., E-mail: emoura@wisc.edu; Micka, John A.; Hammer, Cliff G.

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR 192Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% were found from the homogeneous setup when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The TPS relative differences with the Acuros™ algorithm were similar in both experimental and simulated setups. The discrepancy between the BrachyVision™ Acuros™ and TG-43 dose responses in the phantom described by this work exceeded 12% for certain setups. Conclusions: The results derived from the phantom measurements show good agreement with the simulations and the TPS calculations using the Acuros™ algorithm. Differences in the dose responses were evident in the experimental results when heterogeneous materials were introduced. These measurements prove the usefulness of the heterogeneous phantom for verification of HDR treatment planning systems based on model-based dose calculation algorithms.
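    For reference, the TG-43 dose rates compared against Acuros above follow the standard AAPM line-source formalism; a sketch of that equation (standard notation, not specific to this work):

    \[
    \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta)
    \]

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the geometry function, g_L(r) the radial dose function and F(r,θ) the 2D anisotropy function; because TG-43 assumes an unbounded homogeneous water medium, deviations from it are expected precisely in the heterogeneous configurations studied here.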

  5. An ancient trans-kingdom horizontal transfer of Penelope -like retroelements from arthropods to conifers

    Treesearch

    Xuan Lin; Nurul Faridi; Claudio Casola

    2016-01-01

    Comparative genomics analyses empowered by the wealth of sequenced genomes have revealed numerous instances of horizontal DNA transfers between distantly related species. In  eukaryotes, repetitive DNA sequences known as transposable elements (TEs) are especially prone to  move across species boundaries. Such horizontal transposon transfers, or HTTs, are relatively  ...

  6. The impact of anthropogenic food supply on fruit consumption by dusky-legged guan (Penelope obscura Temminck, 1815): potential effects on seed dispersal in an Atlantic forest area.

    PubMed

    Vasconcellos-Neto, J; Ramos, R R; Pinto, L P

    2015-11-01

    Frugivorous birds are important seed dispersers and influence the recruitment of many plant species in the rainforest. The efficiency of this dispersal generally depends on environment quality, bird species, richness and diversity of resources, and low levels of anthropogenic disturbance. In this study, we compared the number of sightings of dusky-legged guans (Penelope obscura) per km and their movement in two areas of Serra do Japi: one around the administrative base (Base), where birds received anthropogenic food, and a pristine area (DAE) with no anthropogenic resources. We also compared the richness of native seeds in the feces of birds living in these two areas. Although the abundance of P. obscura was higher at the Base, these individuals moved less, dispersed 80% fewer plant species and consumed 30% fewer seeds than individuals from the DAE. The rarefaction analysis indicated a low richness in the frugivorous diet of birds from the Base when compared to the populations from the DAE. We conclude that human food supply can interfere with the behavior of these birds and with the richness of the native seeds they disperse.

  7. A PENELOPE-based system for the automated Monte Carlo simulation of clinacs and voxelized geometries - application to far-from-axis fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sempau, Josep; Badal, Andreu; Brualla, Lorenzo

    Purpose: Two new codes, PENEASY and PENEASYLINAC, which automate the Monte Carlo simulation of Varian Clinacs of the 600, 1800, 2100, and 2300 series, together with their electron applicators and multileaf collimators, are introduced. The challenging case of a relatively small and far-from-axis field has been studied with these tools. Methods: PENEASY is a modular, general-purpose main program for the PENELOPE Monte Carlo system that includes various source models, tallies and variance-reduction techniques (VRT). The code includes a new geometry model that allows the superposition of voxels and objects limited by quadric surfaces. A variant of the VRT known as particle splitting, called fan splitting, is also introduced. PENEASYLINAC, in turn, automatically generates detailed geometry and configuration files to simulate linacs with PENEASY. These tools are applied to the generation of phase-space files, and of the corresponding absorbed dose distributions in water, for two 6 MV photon beams from a Varian Clinac 2100 C/D: a 40 x 40 cm² centered field; and a 3 x 5 cm² field centered at (4.5, -11.5) cm from the beam central axis. This latter configuration implies the largest possible over-traveling values of two of the jaws. Simulation results for the depth dose and lateral profiles at various depths are compared, by using the gamma index, with experimental values obtained with a PTW 31002 ionization chamber. The contribution of several VRTs to the computing speed of the more demanding off-axis case is analyzed. Results: For the 40 x 40 cm² field, the percentages γ1 and γ1.2 of voxels with gamma indices (using 0.2 cm and 2% criteria) larger than unity and larger than 1.2 are 0.2% and 0%, respectively. For the 3 x 5 cm² field, γ1 = 0%. These figures indicate an excellent agreement between simulation and experiment. The dose distribution for the off-axis case with voxels of 2.5 x 2.5 x 2.5 mm³ and an average standard statistical uncertainty of 2% (1σ) is computed in 3.1 h on a single core of a 2.8 GHz Intel Core 2 Duo processor. This result is obtained with the optimal combination of the tested VRTs. In particular, fan splitting for the off-axis case accelerates execution by a factor of 240 with respect to standard particle splitting. Conclusions: PENEASY and PENEASYLINAC can simulate the considered Varian Clinacs both in an accurate and efficient manner. Fan splitting is crucial to achieve simulation results for the off-axis field in an affordable amount of CPU time. Work to include Elekta linacs and to develop a graphical interface that will facilitate user input is underway.
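    The gamma-index comparison quoted above (0.2 cm / 2% criteria) follows the usual Low et al. construction; a hedged sketch with generic notation:

    \[
    \Gamma(\mathbf{r}_m,\mathbf{r}_c) =
    \sqrt{\frac{\lvert \mathbf{r}_c - \mathbf{r}_m \rvert^{2}}{\Delta d^{2}} +
    \frac{\left[ D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m) \right]^{2}}{\Delta D^{2}}},
    \qquad
    \gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \Gamma(\mathbf{r}_m,\mathbf{r}_c)
    \]

    with Δd = 0.2 cm and ΔD = 2% here; a point passes when γ ≤ 1, so γ1 and γ1.2 above are the fractions of voxels exceeding 1 and 1.2, respectively.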

  8. Deaths among Wild Birds during Highly Pathogenic Avian Influenza A(H5N8) Virus Outbreak, the Netherlands.

    PubMed

    Kleyheeg, Erik; Slaterus, Roy; Bodewes, Rogier; Rijks, Jolianne M; Spierenburg, Marcel A H; Beerens, Nancy; Kelder, Leon; Poen, Marjolein J; Stegeman, Jan A; Fouchier, Ron A M; Kuiken, Thijs; van der Jeugd, Henk P

    2017-12-01

    During autumn-winter 2016-2017, highly pathogenic avian influenza A(H5N8) viruses caused mass die-offs among wild birds in the Netherlands. Among the ≈13,600 birds reported dead, most were tufted ducks (Aythya fuligula) and Eurasian wigeons (Anas penelope). Recurrence of avian influenza outbreaks might alter wild bird population dynamics.

  9. Culturally Responsive Education: Diversity in Our Classrooms. Frequently Asked Questions for Saundra Tomlinson-Clarke, Ph.D., and Penelope Lattimer, Ph.D. REL Mid-Atlantic Webinar

    ERIC Educational Resources Information Center

    Regional Educational Laboratory Mid-Atlantic, 2015

    2015-01-01

    The webinar, "Culturally Responsive Education: Diversity in Our Classrooms" focused on the concept of culturally responsive education. Goals of the webinar included: (1) Learning what the research says about effective ways to promote culturally responsive education; (2) Discussion of ways to meet the cultural and linguistic needs of all…

  10. Leading change in healthcare. Anthony L Suchman, David J Sluyter and Penelope R Williamson. Radcliffe, £35, 362pp, ISBN 9781846194481 / 1846194482.

    PubMed

    2012-04-20

    WHEN A book promises to offer an alternative to the 'antiquated and psychologically unsophisticated theories' that have underpinned the leadership of change programmes to date, you can expect the interest of any forward-thinking manager, clinician or academic in the field to be piqued.

  11. Deaths among Wild Birds during Highly Pathogenic Avian Influenza A(H5N8) Virus Outbreak, the Netherlands

    PubMed Central

    Slaterus, Roy; Bodewes, Rogier; Rijks, Jolianne M.; Spierenburg, Marcel A.H.; Beerens, Nancy; Kelder, Leon; Poen, Marjolein J.; Stegeman, Jan A.; Fouchier, Ron A.M.; Kuiken, Thijs; van der Jeugd, Henk P.

    2017-01-01

    During autumn–winter 2016–2017, highly pathogenic avian influenza A(H5N8) viruses caused mass die-offs among wild birds in the Netherlands. Among the ≈13,600 birds reported dead, most were tufted ducks (Aythya fuligula) and Eurasian wigeons (Anas penelope). Recurrence of avian influenza outbreaks might alter wild bird population dynamics. PMID:29148372

  12. Characterization and validation of the thorax phantom Lungman for dose assessment in chest radiography optimization studies.

    PubMed

    Rodríguez Pérez, Sunay; Marshall, Nicholas William; Struelens, Lara; Bosmans, Hilde

    2018-01-01

    This work concerns the validation of the Kyoto-Kagaku thorax anthropomorphic phantom Lungman for use in chest radiography optimization. The equivalence in terms of polymethyl methacrylate (PMMA) was established for the lung and mediastinum regions of the phantom. Patient chest examination data acquired under automatic exposure control were collated over a 2-year period for a standard x-ray room. Parameters surveyed included exposure index, air kerma area product, and exposure time, which were compared with Lungman values. Finally, a voxel model was developed by segmenting computed tomography images of the phantom and implemented in PENELOPE/penEasy Monte Carlo code to compare phantom tissue-equivalent materials with materials from ICRP Publication 89 in terms of organ dose. PMMA equivalence varied depending on tube voltage, from 9.5 to 10.0 cm and from 13.5 to 13.7 cm, for the lungs and mediastinum regions, respectively. For the survey, close agreement was found between the phantom and the patients' median values (deviations lay between 8% and 14%). Differences in lung doses, an important organ for optimization in chest radiography, were below 13% when comparing the use of phantom tissue-equivalent materials versus ICRP materials. The study confirms the value of the Lungman for chest optimization studies.

  13. Absorbed dose calculations in a brachytherapy pelvic phantom using the Monte Carlo method

    PubMed Central

    Rodríguez, Miguel L.; deAlmeida, Carlos E.

    2002-01-01

    Monte Carlo calculations of the absorbed dose at various points of a brachytherapy anthropomorphic phantom are presented. The phantom walls and internal structures are made of polymethylmethacrylate and its external shape was taken from a female Alderson phantom. A complete Fletcher‐Green type applicator with the uterine tandem was fixed at the bottom of the phantom reproducing a typical geometrical configuration as that attained in a gynecological brachytherapy treatment. The dose rate produced by an array of five 137Cs CDC‐J type sources placed in the applicator colpostats and the uterine tandem was evaluated by Monte Carlo simulations using the code penelope at three points: point A, the rectum, and the bladder. The influence of the applicator in the dose rate was evaluated by comparing Monte Carlo simulations of the sources alone and the sources inserted in the applicator. Differences up to 56% in the dose may be observed for the two cases in the planes including the rectum and bladder. The results show a reduction of the dose of 15.6%, 14.0%, and 5.6% in the rectum, bladder, and point A respectively, when the applicator wall and shieldings are considered. PACS number(s): 87.53Jw, 87.53.Wz, 87.53.Vb, 87.66.Xa PMID:12383048

  14. The light output and the detection efficiency of the liquid scintillator EJ-309.

    PubMed

    Pino, F; Stevanato, L; Cester, D; Nebbia, G; Sajo-Bohus, L; Viesti, G

    2014-07-01

    The light output response and the neutron and gamma-ray detection efficiency are determined for liquid scintillator EJ-309. The light output function is compared to those of previous studies. Experimental efficiency results are compared to predictions from GEANT4, MCNPX and PENELOPE Monte Carlo simulations. The differences associated with the use of different light output functions are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Influenza A(H5N8) virus isolation in Russia, 2014.

    PubMed

    Marchenko, Vasiliy Y; Susloparov, Ivan M; Kolosova, Nataliya P; Goncharova, Nataliya I; Shipovalov, Andrey V; Durymanov, Alexander G; Ilyicheva, Tatyana N; Budatsirenova, Lubov V; Ivanova, Valentina K; Ignatyev, Georgy A; Ershova, Svetlana N; Tulyahova, Valeriya S; Mikheev, Valeriy N; Ryzhikov, Alexander B

    2015-11-01

    In this study, we report the isolation of influenza A(H5N8) virus from a Eurasian wigeon (Anas penelope) in Sakha Republic of the Russian Far East. The strain A/wigeon/Sakha/1/2014 (H5N8) has been shown to be pathogenic for mammals. It is similar to the strains that caused outbreaks in wild birds and poultry in Southeast Asia and Europe in 2014.

  16. The complexities of female aging: Four women protagonists in Penelope Lively's novels.

    PubMed

    Oró-Piqueras, Maricel

    2016-01-01

    Penelope Lively is a well-known contemporary British author who has published a good number of novels and short stories since she started her literary career in her late thirties. In her novels, Lively looks at the lives of contemporary characters moulded by specific historical as well as cultural circumstances. Four of her novels, published from 1987 to 2004, present middle-aged and older women as their main protagonists. Through the voices and thoughts of these female characters, the reader is presented with a multiplicity of realities in which women find themselves after their mid-fifties within a contemporary context. Being a woman and entering into old age is a double-sided jeopardy which has increasingly been present in contemporary fiction. Scholars such as Simone de Beauvoir (1949) and Susan Sontag (1972) were among the first to point out a "double standard of aging" when they assured that women were punished when showing external signs of aging much sooner than men. In Lively's four novels, the aging protagonists present their own stories and, through them, as well as through the voices of those around them, the reader is invited to go beyond the aging appearance of the female protagonists while challenging the limiting conceptions attached to the old body and, by extension, to the social and cultural overtones associated with old age. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    PubMed

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.

  18. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application-developed in Windows Azure®, the platform of the Microsoft® cloud-for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code in which the simulations are based-the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different number of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37 ×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
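    The Amdahl's-law behaviour reported above can be made concrete with the usual expression for the speedup on N instances (the parallelizable fraction p below is inferred for illustration, not stated in the abstract):

    \[
    S(N) = \frac{1}{(1-p) + p/N}
    \]

    the quoted 30 h single-instance job finishing in 48.6 min on N = 64 corresponds to S ≈ 37, consistent with p ≈ 0.99; it is the remaining serial fraction of roughly 1% (which grows with the number of instances, as noted above) that keeps the speedup below the ideal factor of 64.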

  19. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results, experimental electron-probe microanalysis (EPMA) measurements and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop φ(ρz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and φ(ρz) correction algorithms. The α-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate α-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP φ(ρz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating α-factors.
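    The α-factor method referred to above is usually cast in the Ziebold-Ogilvie hyperbolic form for a binary A-B system; a hedged sketch, assuming that standard form since the abstract does not spell it out:

    \[
    \frac{C_A}{k_A} = \alpha_{AB} + \left(1 - \alpha_{AB}\right) C_A
    \]

    where C_A is the mass fraction of element A, k_A the intensity ratio measured (or simulated) relative to the pure-element standard, and α_AB the binary α-factor; comparing α-factors derived from experimental and Monte Carlo intensities isolates systematic differences between the datasets.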

  20. Monte Carlo simulation studies for the determination of microcalcification thickness and glandular ratio through dual-energy mammography

    NASA Astrophysics Data System (ADS)

    Del Lama, L. S.; Godeli, J.; Poletti, M. E.

    2017-08-01

    The majority of breast carcinomas can be associated with the presence of calcifications before the development of a mass. However, the overlapping tissues can obscure the visualization of microcalcification clusters due to the reduced contrast-to-noise ratio (CNR). In order to overcome this complication, one potential solution is the use of the dual-energy (DE) technique, in which two different images are acquired at low (LE) and high (HE) energies or kVp to highlight specific lesions or cancel out the tissue background. In this work, the DE features were computationally studied considering simulated acquisitions from a modified PENELOPE Monte Carlo code. The employed irradiation geometry considered typical distances used in digital mammography, a CsI detection system and an updated breast model composed of skin, microcalcifications and glandular and adipose tissues. The breast thickness ranged from 2 to 6 cm with glandularities of 25%, 50% and 75%, where microcalcifications with dimensions from 100 up to 600 μm were positioned. In general, the results indicated an efficiency index better than 87% for the microcalcification thicknesses and better than 95% for the glandular ratio. The simulations evaluated in this work can be used to optimize the elements of the DE imaging chain, making it a complementary tool to conventional single-exposure images, especially for the visualization and estimation of calcification thicknesses and glandular ratios.
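    A common way to combine the LE and HE acquisitions described above is a weighted logarithmic subtraction; a hedged sketch of that generic DE formulation (not necessarily the authors' exact processing chain):

    \[
    I_{\mathrm{DE}}(x,y) = \ln I_{\mathrm{HE}}(x,y) - w\,\ln I_{\mathrm{LE}}(x,y)
    \]

    where the weighting factor w is chosen to cancel the adipose/glandular contrast, so that the residual signal is dominated by the microcalcifications.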

  1. Simulation of decay processes and radiation transport times in radioactivity measurements

    NASA Astrophysics Data System (ADS)

    García-Toraño, E.; Peyres, V.; Bé, M.-M.; Dulieu, C.; Lépy, M.-C.; Salvat, F.

    2017-04-01

    The Fortran subroutine package PENNUC, which simulates random decay pathways of radioactive nuclides, is described. The decay scheme of the active nuclide is obtained from the NUCLEIDE database, whose web application has been complemented with the option of exporting nuclear decay data (possible nuclear transitions, branching ratios, type and energy of emitted particles) in a format that is readable by the simulation subroutines. In the case of beta emitters, the initial energy of the electron or positron is sampled from the theoretical Fermi spectrum. De-excitation of the atomic electron cloud following electron capture and internal conversion is described using transition probabilities from the LLNL Evaluated Atomic Data Library and empirical or calculated energies of released X rays and Auger electrons. The time evolution of radiation showers is determined by considering the lifetimes of nuclear and atomic levels, as well as radiation propagation times. Although PENNUC is designed to operate independently, here it is used in conjunction with the electron-photon transport code PENELOPE, and both together allow the simulation of experiments with radioactive sources in complex material structures consisting of homogeneous bodies limited by quadric surfaces. The reliability of these simulation tools is demonstrated through comparisons of simulated and measured energy spectra from radionuclides with complex multi-gamma spectra, nuclides with metastable levels in their decay pathways, nuclides with two daughters, and beta plus emitters.
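    The branching part of such a simulation can be illustrated with a short, self-contained sketch; this is not the PENNUC Fortran interface, just a hypothetical Python illustration of sampling one decay pathway from branching ratios and walking the level scheme:

        import random

        # Hypothetical level scheme: each level maps to a list of
        # (branching_ratio, emitted_particle, daughter_level) transitions.
        LEVELS = {
            "parent": [(0.85, "beta-", "L2"), (0.15, "beta-", "L1")],
            "L2":     [(0.90, "gamma", "L1"), (0.10, "conversion_electron", "L1")],
            "L1":     [(1.00, "gamma", "ground")],
            "ground": [],
        }

        def sample_decay_path(start="parent"):
            """Walk the level scheme, picking each transition according to its branching ratio."""
            level, emissions = start, []
            while LEVELS[level]:
                r, cumulative = random.random(), 0.0
                for ratio, particle, daughter in LEVELS[level]:
                    cumulative += ratio
                    if r <= cumulative:
                        emissions.append(particle)
                        level = daughter
                        break
                else:  # guard against the cumulative sum falling just short of 1 by rounding
                    _, particle, daughter = LEVELS[level][-1]
                    emissions.append(particle)
                    level = daughter
            return emissions

        print(sample_decay_path())  # e.g. ['beta-', 'gamma', 'gamma']

    In the real package the emitted particles are then handed to the transport code (PENELOPE here), with nuclear and atomic level lifetimes and propagation times attached to time-stamp the shower.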

  2. Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy 125I and 192Ir sources and 60Co cell irradiation.

    PubMed

    Villegas, Fernanda; Tilly, Nina; Ahnesjö, Anders

    2013-09-07

    The stochastic nature of ionizing radiation interactions causes a microdosimetric spread in energy depositions for cell or cell nucleus-sized volumes. The magnitude of the spread may be a confounding factor in dose response analysis. The aim of this work is to give values for the microdosimetric spread for a range of doses imparted by (125)I and (192)Ir brachytherapy radionuclides, and for a (60)Co source. An upgraded version of the Monte Carlo code PENELOPE was used to obtain frequency distributions of specific energy for each of these radiation qualities and for four different cell nucleus-sized volumes. The results demonstrate that the magnitude of the microdosimetric spread increases when the target size decreases or when the energy of the radiation quality is reduced. Frequency distributions calculated according to the formalism of Kellerer and Chmelevsky using full convolution of the Monte Carlo calculated single track frequency distributions confirm that at doses exceeding 0.08 Gy for (125)I, 0.1 Gy for (192)Ir, and 0.2 Gy for (60)Co, the resulting distribution can be accurately approximated with a normal distribution. A parameterization of the width of the distribution as a function of dose and target volume of interest is presented as a convenient form for the use in response modelling or similar contexts.
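    The normal-approximation thresholds quoted above follow from the compound-Poisson structure of specific energy; a hedged sketch in textbook microdosimetry notation (assumed here, not quoted from the paper):

    \[
    \bar{z} = D, \qquad \sigma_z^{2} \approx \bar{z}_{D}\, D, \qquad
    \frac{\sigma_z}{D} \approx \sqrt{\frac{\bar{z}_{D}}{D}}
    \]

    where \bar{z}_{D} is the dose-mean specific energy of the single-event distribution; the relative spread therefore shrinks as the dose increases or the target grows (smaller \bar{z}_{D}), in line with the dose thresholds reported above for 125I, 192Ir and 60Co.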

  3. Contrast-enhanced radiotherapy: feasibility and characteristics of the physical absorbed dose distribution for deep-seated tumors

    NASA Astrophysics Data System (ADS)

    Garnica-Garza, H. M.

    2009-09-01

    Radiotherapy using kilovoltage x-rays in conjunction with contrast agents incorporated into the tumor, gold nanoparticles in particular, could represent a potential alternative to current techniques based on high-energy linear accelerators. In this paper, using the voxelized Zubal phantom in conjunction with the Monte Carlo code PENELOPE to model a prostate cancer treatment, it is shown that in combination with a 360° arc delivery technique, tumoricidal doses of radiation can be delivered to deep-seated tumors while still providing acceptable doses to the skin and other organs at risk for gold concentrations in the tumor within the range of 7-10 mg-Au per gram of tissue. Under these conditions and using an x-ray beam with 90% of the fluence within the range of 80-200 keV, a 72 Gy physical absorbed dose to the prostate can be delivered, while keeping the rectal wall, bladder, skin and femoral heads below 65 Gy, 55 Gy, 40 Gy and 30 Gy, respectively. However, it is also shown that non-uniformities in the contrast agent concentration lead to a severe degradation of the dose distribution and that, therefore, techniques to locally quantify the presence of the contrast agent would be necessary in order to determine the incident x-ray fluence that best reproduces the dosimetry obtained under conditions of uniform contrast agent distribution.

  4. An Ancient Transkingdom Horizontal Transfer of Penelope-Like Retroelements from Arthropods to Conifers

    PubMed Central

    Lin, Xuan; Faridi, Nurul; Casola, Claudio

    2016-01-01

    Comparative genomics analyses empowered by the wealth of sequenced genomes have revealed numerous instances of horizontal DNA transfers between distantly related species. In eukaryotes, repetitive DNA sequences known as transposable elements (TEs) are especially prone to move across species boundaries. Such horizontal transposon transfers, or HTTs, are relatively common within major eukaryotic kingdoms, including animals, plants, and fungi, while rarely occurring across these kingdoms. Here, we describe the first case of HTT from animals to plants, involving TEs known as Penelope-like elements, or PLEs, a group of retrotransposons closely related to eukaryotic telomerases. Using a combination of in situ hybridization on chromosomes, polymerase chain reaction experiments, and computational analyses we show that the predominant PLE lineage, EN(+)PLEs, is highly diversified in loblolly pine and other conifers, but appears to be absent in other gymnosperms. Phylogenetic analyses of both protein and DNA sequences reveal that conifers EN(+)PLEs, or Dryads, form a monophyletic group clustering within a clade of primarily arthropod elements. Additionally, no EN(+)PLEs were detected in 1,928 genome assemblies from 1,029 nonmetazoan and nonconifer genomes from 14 major eukaryotic lineages. These findings indicate that Dryads emerged following an ancient horizontal transfer of EN(+)PLEs from arthropods to a common ancestor of conifers approximately 340 Ma. This represents one of the oldest known interspecific transmissions of TEs, and the most conspicuous case of DNA transfer between animals and plants. PMID:27190138

  5. TH-CD-BRA-02: 3D Remote Dosimetry for MRI-Guided Radiation Therapy: A Hybrid Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rankine, L; The University of North Carolina at Chapel Hill, Chapel Hill, NC; Mein, S

    2016-06-15

    Purpose: To validate the dosimetric accuracy of a commercially available MR-IGRT system using a combination of 3D dosimetry measurements (with PRESAGE(R) radiochromic plastic and optical-CT readout) and an in-house developed GPU-accelerated PENELOPE Monte-Carlo dose calculation system. Methods: 60Co IMRT subject to a 0.35T lateral magnetic field has recently been commissioned in our institution following AAPM’s TG-119 recommendations. We performed PRESAGE(R) sensitivity studies in 4 ml cuvettes to verify linearity, MR-compatibility, and energy-independence. Using 10 cm diameter PRESAGE(R), we delivered an open calibration field to examine the percent depth dose and a symmetrical 3-field plan with three adjacent regions of varying dose to determine uniformity within the dosimeter under a magnetic field. After initial testing, TG-119 plans were created in the TPS and then delivered to 14.5 cm, 2 kg PRESAGE(R) dosimeters. Dose readout was performed via optical-CT at a second institution specializing in remote 3D dosimetry. Absolute dose was measured using an IBA CC01 ion chamber, and the institution's standard patient-specific QA methods were used to validate plan delivery. Calculated TG-119 plans were then compared with an independent Monte Carlo dose calculation (gPENELOPE). Results: PRESAGE(R) responds linearly (R² = 0.9996) to 60Co irradiation, in the presence of a 0.35T magnetic field, with a sensitivity of 0.0305(±0.003) cm⁻¹Gy⁻¹, within 1% of a 6 MV non-MR linac irradiation (R² = 0.9991) with a sensitivity of 0.0302(±0.003) cm⁻¹Gy⁻¹. Analysis of TG-119 clinical plans using 3D gamma (3%/3mm, 10% threshold) gives passing rates of: HN 99.1%, prostate 98.0%, C-shape 90.8%, and multi-target 98.5%. The TPS agreed with gPENELOPE with a mean gamma passing rate of 98.4±1.5% (2%/2mm), with the z-score distributions following a standard normal distribution. Conclusion: We demonstrate for the first time that 3D remote dosimetry using both experimental and computational methods is a feasible and reliable approach to commissioning MR-IMRT, which is particularly useful for less specialized clinics in adopting this new treatment modality.
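    The sensitivity values quoted above are the slopes of a linear dose response of the optical attenuation coefficient, so dose readout from the optical-CT reconstruction reduces to (a hedged sketch with generic symbols):

    \[
    \Delta\mu(\mathbf{r}) = s\, D(\mathbf{r})
    \;\;\Longrightarrow\;\;
    D(\mathbf{r}) = \frac{\Delta\mu(\mathbf{r})}{s},
    \qquad s \approx 0.0305\ \mathrm{cm^{-1}\,Gy^{-1}}\ \text{for } {}^{60}\mathrm{Co}
    \]

    where Δμ is the radiation-induced change in optical attenuation per voxel and s the measured sensitivity.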

  6. Absorbed dose evaluation of Auger electron-emitting radionuclides: impact of input decay spectra on dose point kernels and S-values

    NASA Astrophysics Data System (ADS)

    Falzone, Nadia; Lee, Boon Q.; Fernández-Varea, José M.; Kartsonaki, Christiana; Stuchbery, Andrew E.; Kibédi, Tibor; Vallis, Katherine A.

    2017-03-01

    The aim of this study was to investigate the impact of decay data provided by the newly developed stochastic atomic relaxation model BrIccEmis on dose point kernels (DPKs - radial dose distribution around a unit point source) and S-values (absorbed dose per unit cumulated activity) of 14 Auger electron (AE) emitting radionuclides, namely 67Ga, 80mBr, 89Zr, 90Nb, 99mTc, 111In, 117mSn, 119Sb, 123I, 124I, 125I, 135La, 195mPt and 201Tl. Radiation spectra were based on the nuclear decay data from the medical internal radiation dose (MIRD) RADTABS program and the BrIccEmis code, assuming both an isolated-atom and condensed-phase approach. DPKs were simulated with the PENELOPE Monte Carlo (MC) code using event-by-event electron and photon transport. S-values for concentric spherical cells of various sizes were derived from these DPKs using appropriate geometric reduction factors. The number of Auger and Coster-Kronig (CK) electrons and x-ray photons released per nuclear decay (yield) from MIRD-RADTABS were consistently higher than those calculated using BrIccEmis. DPKs for the electron spectra from BrIccEmis were considerably different from MIRD-RADTABS in the first few hundred nanometres from a point source where most of the Auger electrons are stopped. S-values were, however, not significantly impacted as the differences in DPKs in the sub-micrometre dimension were quickly diminished in larger dimensions. Overestimation in the total AE energy output by MIRD-RADTABS leads to higher predicted energy deposition by AE emitting radionuclides, especially in the immediate vicinity of the decaying radionuclides. This should be taken into account when MIRD-RADTABS data are used to simulate biological damage at nanoscale dimensions.
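    The S-values discussed above follow the MIRD definition of mean absorbed dose to a target per unit cumulated activity in a source region; a hedged sketch in generic notation:

    \[
    S(t \leftarrow s) = \frac{1}{m_t} \sum_i E_i\, Y_i\, \phi_i(t \leftarrow s)
    \]

    where m_t is the target mass, E_i and Y_i the energy and yield per decay of emission i, and φ_i the fraction of that energy absorbed in the target; the yield differences between MIRD-RADTABS and BrIccEmis described above enter directly through E_i and Y_i.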

  7. LTR-Retrotransposons from Bdelloid Rotifers Capture Additional ORFs Shared between Highly Diverse Retroelement Types.

    PubMed

    Rodriguez, Fernando; Kenefick, Aubrey W; Arkhipova, Irina R

    2017-04-11

    Rotifers of the class Bdelloidea, microscopic freshwater invertebrates, possess a highly diversified repertoire of transposon families, which, however, occupy less than 4% of genomic DNA in the sequenced representative Adineta vaga. We performed a comprehensive analysis of A. vaga retroelements, and found that bdelloid long terminal repeat (LTR) retrotransposons, in addition to conserved open reading frame (ORF) 1 and ORF2 corresponding to gag and pol genes, code for an unusually high variety of ORF3 sequences. Retrovirus-like LTR families in A. vaga belong to four major lineages, three of which are rotifer-specific and encode a dUTPase domain. However, only one lineage contains a canonical env-like fusion glycoprotein acquired from paramyxoviruses (non-segmented negative-strand RNA viruses), although smaller ORFs with transmembrane domains may perform similar roles. A different ORF3 type encodes a GDSL esterase/lipase, which was previously identified as ORF1 in several clades of non-LTR retrotransposons, and implicated in membrane targeting. Yet another ORF3 type appears in unrelated LTR-retrotransposon lineages, and displays strong homology to DEDDy-type exonucleases involved in 3'-end processing of RNA and single-stranded DNA. Unexpectedly, each of the enzymatic ORF3s is also associated with different subsets of Penelope-like Athena retroelement families. The unusual association of the same ORF types with retroelements from different classes reflects their modular structure with a high degree of flexibility, and points to gene sharing between different groups of retroelements.

  8. A methodology for efficiency optimization of betavoltaic cell design using an isotropic planar source having an energy dependent beta particle distribution.

    PubMed

    Theirrattanakul, Sirichai; Prelas, Mark

    2017-09-01

    Nuclear batteries based on silicon carbide betavoltaic cells have been studied extensively in the literature. This paper describes an analysis of design parameters, which can be applied to a variety of materials, but is specific to silicon carbide. In order to optimize the interface between a beta source and silicon carbide p-n junction, it is important to account for the specific isotope, angular distribution of the beta particles from the source, the energy distribution of the source as well as the geometrical aspects of the interface between the source and the transducer. In this work, both the angular distribution and energy distribution of the beta particles are modeled using a thin planar beta source (e.g., H-3, Ni-63, S-35, Pm-147, Sr-90, and Y-90) with GEANT4. Previous studies of betavoltaics with various source isotopes have shown that Monte Carlo based codes such as MCNPX, GEANT4 and Penelope generate similar results. GEANT4 is chosen because it has important strengths for the treatment of electron energies below one keV and it is widely available. The model demonstrates the effects of angular distribution, the maximum energy of the beta particle and energy distribution of the beta source on the betavoltaic and it is useful in determining the spatial profile of the power deposition in the cell. Copyright © 2017. Published by Elsevier Ltd.
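    A minimal illustration of the source terms described above, an isotropic angular distribution combined with an energy-dependent beta spectrum, might look like the following Python sketch (hypothetical spectrum values, not GEANT4 code and not the authors' actual source definition):

        import math
        import random

        # Hypothetical, coarsely binned beta spectrum: (energy_keV, relative_probability).
        SPECTRUM = [(5.0, 0.30), (15.0, 0.35), (30.0, 0.20), (45.0, 0.10), (60.0, 0.05)]

        def sample_energy_keV():
            """Sample a beta energy from the tabulated spectrum (discrete inverse CDF)."""
            r, cumulative = random.random(), 0.0
            for energy, prob in SPECTRUM:
                cumulative += prob
                if r <= cumulative:
                    return energy
            return SPECTRUM[-1][0]  # fall back on the last bin if rounding leaves a gap

        def sample_direction():
            """Sample an isotropic direction into the hemisphere facing the converter."""
            cos_theta = random.random()               # isotropic => cos(theta) uniform in [0, 1]
            phi = 2.0 * math.pi * random.random()
            sin_theta = math.sqrt(1.0 - cos_theta ** 2)
            return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

        print(sample_energy_keV(), sample_direction())

    Histories sampled this way would then be handed to the transport code to score energy deposition in the semiconductor junction; the point of the sketch is only that both the angular and the energy distributions of the planar source need to be sampled explicitly.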

  9. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  10. Hazardous Materials Management System. A Guide for Local Emergency Managers.

    DTIC Science & Technology

    1981-07-01

    Prepared by Myra J. Lee and Penelope G. Roe. Recoverable fragments of the OCR text include: "... out of, through, and within the area. Estimate the frequencies of shipments and the types and quantities of materials involved. ... 4. Double-replacement reactions; 5. Oxidation-reduction reactions. B. Rate of Chemical Reactions: 1. Nature of material; 2. Subdivision of ..."

  11. Photometric geodesy of main-belt asteroids. III - Additional lightcurves

    NASA Technical Reports Server (NTRS)

    Weidenschilling, S. J.; Chapman, C. R.; Davis, D. R.; Greenberg, R.; Levy, D. H.

    1990-01-01

    A total of 107 complete or partial lightcurves are presented for 59 different asteroids over the 1982-1989 period. Unusual lightcurves with unequal minima and maxima at large amplitudes are preferentially seen for M-type asteroids. Some asteroids, such as 16 Psyche and 201 Penelope, exhibit lightcurves combining large amplitude with very unequal brightness for both maxima and both minima, even at small phase angles. An M-type asteroid is believed to consist of a metal core of a differentiated parent body that has had its rocky mantle completely removed by one or more large impacts.

  12. Photometric geodesy of main-belt asteroids. III. Additional lightcurves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidenschilling, S.J.; Chapman, C.R.; Davis, D.R.

    1990-08-01

    A total of 107 complete or partial lightcurves are presented for 59 different asteroids over the 1982-1989 period. Unusual lightcurves with unequal minima and maxima at large amplitudes are preferentially seen for M-type asteroids. Some asteroids, such as 16 Psyche and 201 Penelope, exhibit lightcurves combining large amplitude with very unequal brightness for both maxima and both minima, even at small phase angles. An M-type asteroid is believed to consist of a metal core of a differentiated parent body that has had its rocky mantle completely removed by one or more large impacts. 39 refs.

  13. Geant4-DNA track-structure simulations for gold nanoparticles: The importance of electron discrete models in nanometer volumes.

    PubMed

    Sakata, Dousatsu; Kyriakou, Ioanna; Okada, Shogo; Tran, Hoang N; Lampe, Nathanael; Guatelli, Susanna; Bordage, Marie-Claude; Ivanchenko, Vladimir; Murakami, Koichi; Sasaki, Takashi; Emfietzoglou, Dimitris; Incerti, Sebastien

    2018-05-01

    Gold nanoparticles (GNPs) are known to enhance the absorbed dose in their vicinity following photon-based irradiation. To investigate the therapeutic effectiveness of GNPs, previous Monte Carlo simulation studies have explored GNP dose enhancement using mostly condensed-history models. However, in general, such models are suitable for macroscopic volumes and for electron energies above a few hundred electron volts. We have recently developed, for the Geant4-DNA extension of the Geant4 Monte Carlo simulation toolkit, discrete physics models for electron transport in gold which include the description of the full atomic de-excitation cascade. These models allow event-by-event simulation of electron tracks in gold down to 10 eV. The present work describes how such specialized physics models impact simulation-based studies on GNP radioenhancement in the context of x-ray radiotherapy. The new discrete physics models are compared to the Geant4 Penelope and Livermore condensed-history models, which are widely used for simulation-based NP radioenhancement studies. An ad hoc Geant4 simulation application has been developed to calculate the absorbed dose in liquid water around a GNP and its radioenhancement, caused by secondary particles emitted from the GNP itself, when irradiated with a monoenergetic electron beam. The effect of the new physics models is also quantified in the calculation of secondary particle spectra, when originating in the GNP and when exiting from it. The new physics models show backscattering coefficients similar to those of the existing Geant4 Livermore and Penelope models in large volumes for 100 keV incident electrons. However, in submicron sized volumes, only the discrete models describe the high backscattering that should still be present around GNPs at these length scales. Sizeable differences (mostly above a factor of 2) are also found in the radial distribution of absorbed dose and secondary particles between the new and the existing Geant4 models. The degree to which these differences are due to intrinsic limitations of the condensed-history models or to differences in the underlying scattering cross sections requires further investigation. Improved physics models for gold are necessary to better model the impact of GNPs in radiotherapy via Monte Carlo simulations. We implemented discrete electron transport models for gold in Geant4 that are applicable down to 10 eV, including the modeling of the full de-excitation cascade. It is demonstrated that the new models have a significant positive impact on particle transport simulations in gold volumes with submicron dimensions compared to the existing Livermore and Penelope condensed-history models of Geant4. © 2018 American Association of Physicists in Medicine.

  14. MCNP-based computational model for the Leksell gamma knife.

    PubMed

    Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav

    2007-01-01

    We have focused on the use of the MCNP code for calculation of Gamma Knife radiation field parameters with a homogeneous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on the EGS4 and PENELOPE codes as well as the Leksell Gamma Knife treatment planning system Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources at the same time. Within each beam, it considers the technical construction of the source, the source holder, the collimator system, the spherical phantom, and the surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' works except in the case of the 4 mm collimator size, where averaging over the scoring volume and statistical uncertainties strongly influences the calculated results. In general, all the results are dependent on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken about the fluctuations within the plateau, which can influence the normalization, and about the accuracy in determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement, as are the integral doses calculated in small calculation matrix volumes. However, deviations in integral doses up to 50% can be observed for large volumes such as the total skull volume. The differences observed in the treatment of scattered radiation between the MC method and the LGP may be important in this case. We have also studied the influence of differential direction sampling of primary photons and have found that, due to the anisotropic sampling, doses around the isocenter deviate from each other by up to 6%. With caution about the details of the calculation settings, it is possible to employ the MCNP Monte Carlo code for independent verification of the Leksell Gamma Knife radiation field properties.
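
    Since the abstract turns on output factors derived from dose tallies in small scoring volumes, the following Python sketch shows one way such a ratio and its statistical uncertainty could be formed; the tally values are invented and the choice of the largest collimator as reference is an assumption for the example.

```python
import numpy as np

def output_factor(tally, tally_ref):
    """Output factor: dose scored in a small volume at the isocentre for a given
    collimator divided by the dose for the reference collimator.
    Each tally is (mean dose, 1-sigma statistical uncertainty)."""
    d, sd = tally
    dref, sdref = tally_ref
    of = d / dref
    sigma = of * np.hypot(sd / d, sdref / dref)   # uncorrelated uncertainties
    return of, sigma

# Invented tallies (Gy per primary photon), reference collimator listed second.
of4, u4 = output_factor((8.1e-16, 0.2e-16), (9.3e-16, 0.1e-16))
print(f"4 mm output factor = {of4:.3f} +/- {u4:.3f}")
```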

  15. Comparison of clinical parameters in captive Cracidae fed traditional and extruded diets.

    PubMed

    Candido, Marcus Vinicius; Silva, Louise C C; Moura, Joelma; Bona, Tania D M M; Montiani-Ferreira, Fabiano; Santin, Elizabeth

    2011-09-01

    The Cracidae family of neotropical birds is regarded as one of the most severely threatened in the world. They traditionally have been extensively hunted, and, thus, ex situ efforts for their conservation are recommended and involve the optimization of their care in captivity. Nutrition is a fundamental aspect of husbandry, which influences survival and reproduction in captivity. In this study, a total of 29 animals, including 3 species (Penelope obscura, Penelope superciliaris, and Aburria jacutinga), were subjected to monthly physical examination and blood sampling before and after dietary conversion from the traditional diet of broiler feed, fruits, and vegetables to a nutritionally balanced commercial diet specifically designed for wild Galliformes. The diet change produced differences in several parameters tested, including an increase (P < 0.05) in hemoglobin concentration for all species. Increases (P < 0.05) in erythrocyte count, packed cell volume, and body weight were observed in P. obscura, with a concomitant decrease in the standard deviation of these parameters, indicating improved uniformity. Globulins and lipase also were reduced (P < 0.05) in P. obscura. Although leukocyte count was lowered and eosinophils were increased in all 3 species after dietary conversion, these 2 changes were significant (P < 0.05) only in P. superciliaris. A. jacutinga had higher (P < 0.05) blood glucose concentrations than the other species, but diet had no effect on this parameter. Blood uric acid concentrations were higher (P < 0.05) after conversion to the commercial diet in P. superciliaris. The provision of a commercial extruded diet as a single food source was beneficial, leading to a general improvement in clinical aspects and group uniformity in these 3 species of Cracidae.

  16. Angular dependence of Kβ/Kα intensity ratios of thick Ti and Cu pure elements from 10-25 keV electron bombardment

    NASA Astrophysics Data System (ADS)

    Singh, B.; Kumar, S.; Prajapati, S.; Singh, B. K.; Llovet, X.; Shanker, R.

    2018-02-01

    Measurements yielding the first results on angular dependence of Kβ/Kα X-ray intensity ratios of thick Ti (Z = 22) and Cu (Z = 29) targets induced by 10-25 keV electrons are presented. The measurements were done by rotating the target surface around the electron beam direction in the angular detection range 105° ≤ θ ≤ 165° in the reflection mode using an energy dispersive Si PIN photodiode detector. The measured angular dependence of Kβ/Kα intensity ratios is shown to be almost isotropic for Ti and Cu targets for the range of detection angles, 105° ≤ θ ≤ 150°, while there is a very weak increase beyond 150° for both targets. No dependence of Kβ/Kα intensity ratios on impact energy is observed; while on average, the value of the Kβ/Kα X-ray intensity ratio for Cu is larger by about 8% than that for Ti, which indicates a weak Z-dependence of the target. The experimental results are compared with those obtained from PENELOPE MC calculations and from the Evaluated Atomic Data Library (EADL) ratios. These results on Kβ/Kα X-ray intensity ratios are found to be in reasonable agreement in the detection angle range 105° ≤ θ ≤ 150° to within uncertainties, whereas the simulation and experimental results show a very slight increase in the intensity ratio with θ as the latter attains higher values. The results presented in this work provide a direct check on the accuracy of PENELOPE at oblique incidence angles for which there has been a lack of measurements in the literature until now.

  17. An Ancient Transkingdom Horizontal Transfer of Penelope-Like Retroelements from Arthropods to Conifers.

    PubMed

    Lin, Xuan; Faridi, Nurul; Casola, Claudio

    2016-05-02

    Comparative genomics analyses empowered by the wealth of sequenced genomes have revealed numerous instances of horizontal DNA transfers between distantly related species. In eukaryotes, repetitive DNA sequences known as transposable elements (TEs) are especially prone to move across species boundaries. Such horizontal transposon transfers, or HTTs, are relatively common within major eukaryotic kingdoms, including animals, plants, and fungi, while rarely occurring across these kingdoms. Here, we describe the first case of HTT from animals to plants, involving TEs known as Penelope-like elements, or PLEs, a group of retrotransposons closely related to eukaryotic telomerases. Using a combination of in situ hybridization on chromosomes, polymerase chain reaction experiments, and computational analyses, we show that the predominant PLE lineage, EN(+)PLEs, is highly diversified in loblolly pine and other conifers, but appears to be absent in other gymnosperms. Phylogenetic analyses of both protein and DNA sequences reveal that conifer EN(+)PLEs, or Dryads, form a monophyletic group clustering within a clade of primarily arthropod elements. Additionally, no EN(+)PLEs were detected in 1,928 genome assemblies from 1,029 nonmetazoan and nonconifer genomes from 14 major eukaryotic lineages. These findings indicate that Dryads emerged following an ancient horizontal transfer of EN(+)PLEs from arthropods to a common ancestor of conifers approximately 340 Ma. This represents one of the oldest known interspecific transmissions of TEs, and the most conspicuous case of DNA transfer between animals and plants. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution 2016. This work is written by US Government employees and is in the public domain in the US.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koger, B; Kirkby, C; Dept. of Oncology, Dept. Of Medical Physics, Jack Ady Cancer Centre, Lethbridge, Alberta

    Introduction: The use of gold nanoparticles (GNPs) in radiotherapy has shown promise for therapeutic enhancement. In this study, we explore the feasibility of enhancing radiotherapy with GNPs in an arc-therapy context. We use Monte Carlo simulations to quantify the macroscopic dose-enhancement ratio (DER) and tumour to normal tissue ratio (TNTR) as functions of photon energy over various tumour and body geometries. Methods: GNP-enhanced arc radiotherapy (GEART) was simulated using the PENELOPE Monte Carlo code and the penEasy main program. We simulated 360° arc-therapy with monoenergetic photon energies of 50-1000 keV and several clinical spectra used to treat a spherical tumour containing uniformly distributed GNPs in a cylindrical tissue phantom. Various geometries were used to simulate different tumour sizes and depths. Voxel dose was used to calculate DERs and TNTRs. Inhomogeneity effects were examined through skull dose in brain tumour treatment simulations. Results: Below 100 keV, DERs greater than 2.0 were observed. Compared to 6 MV, tumour dose at low energies was more conformal, with lower normal tissue dose and higher TNTRs. Both the DER and TNTR increased with increasing cylinder radius and decreasing tumour radius. The inclusion of bone showed excellent tumour conformality at low energies, though with an increase in skull dose (40% of tumour dose with 100 keV compared to 25% with 6 MV). Conclusions: Even in the presence of inhomogeneities, our results show promise for the treatment of deep-seated tumours with low-energy GEART, with greater tumour dose conformality and lower normal tissue dose than 6 MV.
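
    To make the two figures of merit concrete, the sketch below computes a macroscopic dose-enhancement ratio and a tumour-to-normal-tissue ratio from voxel dose arrays; the voxel grids are toy stand-ins, not the PENELOPE/penEasy tallies of the study.

```python
import numpy as np

def dose_enhancement_ratio(dose_gnp, dose_plain, tumour_mask):
    """Macroscopic DER: mean tumour dose with GNPs over mean tumour dose without."""
    return dose_gnp[tumour_mask].mean() / dose_plain[tumour_mask].mean()

def tumour_to_normal_ratio(dose, tumour_mask):
    """TNTR: mean tumour dose over mean dose in the remaining (normal) voxels."""
    return dose[tumour_mask].mean() / dose[~tumour_mask].mean()

# Toy voxel grids standing in for Monte Carlo voxel-dose tallies.
rng = np.random.default_rng(0)
shape = (40, 40, 40)
tumour = np.zeros(shape, dtype=bool)
tumour[15:25, 15:25, 15:25] = True
dose_plain = rng.normal(1.0, 0.02, shape)
dose_gnp = dose_plain + tumour * 0.8        # pretend GNP loading boosts tumour dose

print("DER :", round(dose_enhancement_ratio(dose_gnp, dose_plain, tumour), 2))
print("TNTR:", round(tumour_to_normal_ratio(dose_gnp, tumour), 2))
```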

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de; Zaragoza, Francisco J.; Sempau, Josep

    Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. The radiotherapy planning and optimization require accurate calculation of the dose in these anatomical structures, which are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: Monte Carlo simulations of a Varian Clinac 2100 C/D and a human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulated results allow choosing the most adequate treatment setup configuration, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and a central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.

  20. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 02: Dosimetric effects of gold nanoparticle surface coatings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koger, Brandon; Kirkby, Charles

    2016-08-15

    Introduction: Gold nanoparticles (GNPs) can enhance radiation therapy within a tumour, increasing local energy deposition under irradiation, but experimental evidence suggests the enhancement is not as large as predicted by dose enhancement alone. Many studies neglect to account for surface coatings that are frequently used to optimize GNP uptake and biological distribution. This study uses Monte Carlo methods to investigate the consequences on local dose enhancement due to including these surface coatings. Methods: Using the PENELOPE Monte Carlo code system, GNP irradiation was simulated both with and without surface coatings of polyethylene glycol (PEG) of various molecular weights. Dose was scored to the gold, coating, and surrounding water, and the dosimetric differences between these scenarios were examined. Results: The simulated PEG coating absorbs a large portion of the energy that would otherwise be deposited in the medium. The mean dose to water was reduced by up to 2.5, 3.5, and 4.5% for GNPs of diameters 50, 20, and 10 nm, respectively. This effect was more pronounced for smaller GNPs, thicker coatings, and low photon source energies where the enhancement due to GNPs is the greatest. The molecular weight of the coating material did not have a significant impact on the dose. Conclusions: The inclusion of a coating material in GNP-enhanced radiation may reduce the dose enhancement due to the nanoparticles. Both the composition and size of the coating play a role in the level of this reduction and should be considered carefully.

  1. Rotation parameters and shapes of 15 asteroids

    NASA Astrophysics Data System (ADS)

    Tungalag, N.; Shevchenko, V. G.; Lupishko, D. F.

    2002-12-01

    With the use of the combined method (the amplitude and magnitude method plus the epoch method), the pole coordinates, sidereal rotation periods, and axial ratios of triaxial ellipsoid figures were determined for asteroids 22 Kalliope, 75 Eurydike, 93 Minerva, 97 Klotho, 105 Artemis, 113 Amalthea, 119 Althaea, 201 Penelope, 270 Anahita, 338 Budrosa, 487 Venetia, 674 Rachele, 776 Berbericia, 887 Alinda, and 951 Gaspra. For eight of them (asteroids 75, 97, 105, 113, 119, 338, 674, and 887) these values were obtained for the first time. We used a numerical photometric asteroid model based on an ellipsoidal asteroid shape, a homogeneous albedo distribution over the surface, and Akimov's scattering law.

  2. Bowtie filters for dedicated breast CT: Analysis of bowtie filter material selection.

    PubMed

    Kontson, Kimberly; Jennings, Robert J

    2015-09-01

    For a given bowtie filter design, both the selection of material and the physical design control the energy fluence, and consequently the dose distribution, in the object. Using three previously described bowtie filter designs, the goal of this work is to demonstrate the effect that different materials have on the bowtie filter performance measures. Three bowtie filter designs that compensate for one or more aspects of the beam-modifying effects due to the differences in path length in a projection have been designed. The nature of the designs allows for their realization using a variety of materials. The designs were based on a phantom, 14 cm in diameter, composed of 40% fibroglandular and 60% adipose tissue. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis-material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. With bowtie design #3, it is possible to eliminate the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. Seven different materials were chosen to represent a range of chemical compositions and densities. After calculation of construction parameters for each bowtie filter design, a bowtie filter was created using each of these materials (assuming reasonable construction parameters were obtained), resulting in a total of 26 bowtie filters modeled analytically and in the penelope Monte Carlo simulation environment. Using the analytical model of each bowtie filter, design profiles were obtained and energy fluence as a function of fan-angle was calculated. Projection images with and without each bowtie filter design were also generated using penelope and reconstructed using FBP. Parameters such as dose distribution, noise uniformity, and scatter were investigated. Analytical calculations with and without each bowtie filter show that some materials for a given design produce bowtie filters that are too large for implementation in breast CT scanners or too small to accurately manufacture. Results also demonstrate the ability to manipulate the energy fluence distribution (dynamic range) by using different materials, or different combinations of materials, for a given bowtie filter design. This feature is especially advantageous when using photon counting detector technology. Monte Carlo simulation results from penelope show that all studied material choices for bowtie design #2 achieve nearly uniform dose distribution, noise uniformity index less than 5%, and nearly uniform scatter-to-primary ratio. These same features can also be obtained using certain materials with bowtie designs #1 and #3. With the three bowtie filter designs used in this work, the selection of material is an important design consideration. An appropriate material choice can improve image quality, dose uniformity, and dynamic range.
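
    As a worked illustration of the design #3 idea described above (choosing the bowtie thickness so that every ray sees the same effective attenuation), the sketch below computes a thickness profile for a monoenergetic beam and a centred cylindrical phantom; the geometry and attenuation coefficients are assumptions for the example, not the paper's design values.

```python
import numpy as np

def chord_length(fan_angle_rad, src_to_iso_cm, phantom_radius_cm):
    """Path length of a fan-beam ray through a centred cylindrical phantom."""
    d = src_to_iso_cm * np.sin(fan_angle_rad)        # ray-to-centre distance
    return 2.0 * np.sqrt(np.clip(phantom_radius_cm**2 - d**2, 0.0, None))

def bowtie_thickness(fan_angle_rad, mu_phantom, mu_bowtie,
                     src_to_iso_cm=60.0, phantom_radius_cm=7.0):
    """Design-#3-style profile: pick t so that mu_bowtie*t + mu_phantom*L is the
    same for every ray as for the central ray (full diameter, zero bowtie)."""
    L = chord_length(fan_angle_rad, src_to_iso_cm, phantom_radius_cm)
    L0 = 2.0 * phantom_radius_cm
    return mu_phantom * (L0 - L) / mu_bowtie

angles = np.deg2rad(np.linspace(-7.0, 7.0, 8))
# Illustrative effective linear attenuation coefficients (1/cm), not design data.
for a, t in zip(np.rad2deg(angles), bowtie_thickness(angles, 0.22, 0.50)):
    print(f"fan angle {a:+5.1f} deg -> bowtie thickness {t:5.2f} cm")
```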

  3. Bowtie filters for dedicated breast CT: Analysis of bowtie filter material selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.

    Purpose: For a given bowtie filter design, both the selection of material and the physical design control the energy fluence, and consequently the dose distribution, in the object. Using three previously described bowtie filter designs, the goal of this work is to demonstrate the effect that different materials have on the bowtie filter performance measures. Methods: Three bowtie filter designs that compensate for one or more aspects of the beam-modifying effects due to the differences in path length in a projection have been designed. The nature of the designs allows for their realization using a variety of materials. The designs were based on a phantom, 14 cm in diameter, composed of 40% fibroglandular and 60% adipose tissue. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis-material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. With bowtie design #3, it is possible to eliminate the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. Seven different materials were chosen to represent a range of chemical compositions and densities. After calculation of construction parameters for each bowtie filter design, a bowtie filter was created using each of these materials (assuming reasonable construction parameters were obtained), resulting in a total of 26 bowtie filters modeled analytically and in the PENELOPE Monte Carlo simulation environment. Using the analytical model of each bowtie filter, design profiles were obtained and energy fluence as a function of fan-angle was calculated. Projection images with and without each bowtie filter design were also generated using PENELOPE and reconstructed using FBP. Parameters such as dose distribution, noise uniformity, and scatter were investigated. Results: Analytical calculations with and without each bowtie filter show that some materials for a given design produce bowtie filters that are too large for implementation in breast CT scanners or too small to accurately manufacture. Results also demonstrate the ability to manipulate the energy fluence distribution (dynamic range) by using different materials, or different combinations of materials, for a given bowtie filter design. This feature is especially advantageous when using photon counting detector technology. Monte Carlo simulation results from PENELOPE show that all studied material choices for bowtie design #2 achieve nearly uniform dose distribution, noise uniformity index less than 5%, and nearly uniform scatter-to-primary ratio. These same features can also be obtained using certain materials with bowtie designs #1 and #3. Conclusions: With the three bowtie filter designs used in this work, the selection of material is an important design consideration. An appropriate material choice can improve image quality, dose uniformity, and dynamic range.

  4. A Comparison of Experimental EPMA Data and Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2004-01-01

    Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and phi(rho z) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been used to develop phi(rho z) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.

  5. Model-Based Radiation Dose Correction for Yttrium-90 Microsphere Treatment of Liver Tumors With Central Necrosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ching-Sheng; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Lin, Ko-Han

    Purpose: The objectives of this study were to model and calculate the absorbed fraction φ of energy emitted from yttrium-90 (90Y) microsphere treatment of necrotic liver tumors. Methods and Materials: The tumor necrosis model was proposed for the calculation of φ over the spherical shell region. Two approaches, the semianalytic method and the probabilistic method, were adopted. In the former method, the range-energy relationship and the sampling of electron paths were applied to calculate the energy deposition within the target region, using the straight-ahead and continuous-slowing-down approximation (CSDA) method. In the latter method, the Monte Carlo PENELOPE code was used to verify results from the first method. Results: The fraction of energy, φ, absorbed from 90Y by a 1-cm thickness of tumor shell from the microsphere distribution, calculated by CSDA with the complete beta spectrum, was 0.832 ± 0.001 and 0.833 ± 0.001 for smaller (r_T = 5 cm) and larger (r_T = 10 cm) tumors, respectively (where r_T and r_N are the radii of the tumor and the necrosis). The absorbed fraction depended mainly on the thickness of the tumor necrosis configuration, rather than on the tumor necrosis size. The maximal absorbed fraction φ, which occurred in tumors without central necrosis, was different for each tumor size: 0.950 ± 0.000 and 0.975 ± 0.000 for smaller (r_T = 5 cm) and larger (r_T = 10 cm) tumors, respectively (p < 0.0001). Conclusions: The tumor necrosis model was developed for dose calculation of 90Y microsphere treatment of hepatic tumors with central necrosis. With this model, important information is provided regarding the absorbed fraction applicable to clinical 90Y microsphere treatment.

  6. Monte Carlo simulations used to calculate the energy deposited in the coronary artery lumen as a function of iodine concentration and photon energy.

    PubMed

    Hocine, Nora; Meignan, Michel; Masset, Hélène

    2018-04-01

    To better understand the risks of cumulative medical X-ray investigations and the possible causal role of the contrast agent on the coronary artery wall, the correlation between iodinated contrast media and the increase of energy deposited in the coronary artery lumen as a function of iodine concentration and photon energy is investigated. The calculations of energy deposition have been performed using Monte Carlo (MC) simulation codes, namely PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and Monte Carlo N-Particle eXtended (MCNPX). Exposure of a cylindrical phantom, an artery and a metal stent (AISI 316L) to several X-ray photon beams was simulated. For the energies used in cardiac imaging, the energy deposited in the coronary artery lumen increases with the quantity of iodine. Monte Carlo calculations indicate a strong dependence of the energy enhancement factor (EEF) on photon energy and iodine concentration. The maximum value of the EEF is 25, obtained at 83 keV and for 400 mg iodine/mL. No significant impact of the stent is observed on the absorbed dose in the artery for incident X-ray beams with mean energies of 44, 48, 52 and 55 keV. A strong correlation was shown between the increase in the concentration of iodine and the energy deposited in the coronary artery lumen for the energies used in cardiac imaging and over the energy range between 44 and 55 keV. The data provided by this study could be useful for creating new medical imaging protocols to obtain better diagnostic information with a lower level of radiation exposure.
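
    Since the abstract's central quantity is the energy enhancement factor, a minimal Python sketch of how such a factor could be formed from paired Monte Carlo tallies follows; the tally values and concentrations are invented for illustration.

```python
def energy_enhancement_factor(edep_with_iodine, edep_without_iodine):
    """EEF: energy deposited in the artery lumen with contrast present divided by
    the same tally without iodine (same beam quality and geometry)."""
    return edep_with_iodine / edep_without_iodine

# Hypothetical lumen tallies (MeV per source photon) versus iodine concentration.
tallies = {0: 1.0e-4, 100: 6.0e-4, 200: 1.2e-3, 400: 2.5e-3}  # mg iodine/mL
baseline = tallies[0]
for conc, edep in tallies.items():
    print(f"{conc:3d} mg I/mL -> EEF = {energy_enhancement_factor(edep, baseline):4.1f}")
```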

  7. Evaluation of PeneloPET Simulations of Biograph PET/CT Scanners

    NASA Astrophysics Data System (ADS)

    Abushab, K. M.; Herraiz, J. L.; Vicente, E.; Cal-González, J.; España, S.; Vaquero, J. J.; Jakoby, B. W.; Udías, J. M.

    2016-06-01

    Monte Carlo (MC) simulations are widely used in positron emission tomography (PET) for optimizing detector design and acquisition protocols, and for evaluating corrections and reconstruction methods. PeneloPET is an MC code for PET simulations based on PENELOPE, which considers detector geometry, acquisition electronics and materials, and source definitions. While PeneloPET has been successfully employed and validated with small animal PET scanners, it required a proper validation with clinical PET scanners including time-of-flight (TOF) information. For this purpose, we chose the family of Biograph PET/CT scanners: the Biograph True-Point (B-TP), the Biograph True-Point with TrueV (B-TPTV) and the Biograph mCT. They have similar block detectors and electronics, but a different number of rings and configuration. Some effective parameters of the simulations, such as the dead-time and the size of the reflectors in the detectors, were adjusted to reproduce the sensitivity and noise equivalent count (NEC) rate of the B-TPTV scanner. These parameters were then used to make predictions of experimental results such as sensitivity, NEC rate, spatial resolution, and scatter fraction (SF), from all the Biograph scanners and some variations of them (energy windows and additional rings of detectors). Predictions agree with the measured values for the three scanners, within 7% (sensitivity and NEC rate) and 5% (SF). The resolution obtained for the B-TPTV is slightly better (10%) than the experimental values. In conclusion, we have shown that PeneloPET is suitable for simulating and investigating clinical systems with good accuracy and short computational time, though some effort in tuning a few parameters of the modeled scanners may be needed in case the full details of the scanners under study are not available.
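
    As a reminder of how the NEC figure of merit quoted above is typically assembled from trues, scatter and randoms rates, here is a small Python sketch; the count rates are invented, and the randoms factor k is one common convention rather than the scanner vendors' exact definition.

```python
def nec_rate(trues, scatter, randoms, k=2.0):
    """Noise-equivalent count rate in one common form: NEC = T^2 / (T + S + k*R),
    with k = 2 for delayed-window randoms subtraction (k = 1 for a noiseless
    randoms estimate); conventions differ between implementations."""
    return trues**2 / (trues + scatter + k * randoms)

def scatter_fraction(scatter, trues):
    """SF = S / (S + T), usually quoted at low activity where randoms are negligible."""
    return scatter / (scatter + trues)

# Illustrative count rates in kcps (not measured Biograph data).
T, S, R = 250.0, 120.0, 180.0
print(f"NEC = {nec_rate(T, S, R):.1f} kcps, SF = {scatter_fraction(S, T):.1%}")
```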

  8. Improved outcome of 131I-mIBG treatment through combination with external beam radiotherapy in the SK-N-SH mouse model of neuroblastoma.

    PubMed

    Corroyer-Dulmont, Aurélien; Falzone, Nadia; Kersemans, Veerle; Thompson, James; Allen, Danny P; Able, Sarah; Kartsonaki, Christiana; Malcolm, Javian; Kinchesh, Paul; Hill, Mark A; Vojnovic, Boris; Smart, Sean C; Gaze, Mark N; Vallis, Katherine A

    2017-09-01

    To assess the efficacy of different schedules for combining external beam radiotherapy (EBRT) with molecular radiotherapy (MRT) using 131I-mIBG in the management of neuroblastoma. BALB/c nu/nu mice bearing SK-N-SH neuroblastoma xenografts were assigned to five treatment groups: 131I-mIBG 24 h after EBRT, EBRT 6 days after 131I-mIBG, EBRT alone, 131I-mIBG alone and control (untreated). A total of 56 mice were assigned to 3 studies. Study 1: Vessel permeability was evaluated using dynamic contrast-enhanced (DCE)-MRI (n=3). Study 2: Tumour uptake of 131I-mIBG in excised lesions was evaluated by γ-counting and autoradiography (n=28). Study 3: Tumour volume was assessed by longitudinal MR imaging and survival was analysed (n=25). Tumour dosimetry was performed using Monte Carlo simulations of absorbed fractions with the radiation transport code PENELOPE. Given alone, both 131I-mIBG and EBRT resulted in a seven-day delay in tumour regrowth. Following EBRT, vessel permeability was evaluated by DCE-MRI and showed an increase at 24 h post irradiation that correlated with an increase in 131I-mIBG tumour uptake, absorbed dose and overall survival in the case of combined treatment. Similarly, EBRT administered seven days after MRT to coincide with tumour regrowth significantly decreased the tumour volume and increased overall survival. This study demonstrates that combining EBRT and MRT has an enhanced therapeutic effect and emphasizes the importance of treatment scheduling according to pathophysiological criteria such as tumour vessel permeability and tumour growth kinetics. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Assessment of cell death mechanisms triggered by 177Lu-anti-CD20 in lymphoma cells.

    PubMed

    Azorín-Vega, E; Rojas-Calderón, E; Martínez-Ventura, B; Ramos-Bernal, J; Serrano-Espinoza, L; Jiménez-Mancilla, N; Ordaz-Rosado, D; Ferro-Flores, G

    2018-08-01

    The aim of this research was to evaluate the cell cycle redistribution and activation of early and late apoptotic pathways in lymphoma cells after treatment with 177Lu-anti-CD20. Experimental and computer models were used to calculate the radiation absorbed dose to cancer cell nuclei. The computer model (Monte Carlo, PENELOPE) consisted of twenty spheres representing cells with an inner sphere (cell nucleus) embedded in culture media. Radiation emissions of the radiopharmaceutical located in cell membranes and in culture media were considered for nuclei dose calculations. Flow cytometric analyses demonstrated that doses as low as 4.8 Gy are enough to induce cell cycle arrest and activate late apoptotic pathways. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Van Gogh and lithium. Creativity and bipolar disorder: perspective of a writer.

    PubMed

    Rowe, P

    1999-12-01

    Penelope Rowe, educated at Sydney University in the 1960s and the mother of three daughters, is the author of three published novels and two collections of short stories. She agrees with Graham Greene that 'the creative writer perceives his world once and for all in childhood and adolescence and his whole career is an effort to illustrate his private world in terms of the great public world we all share. In the childhood of Judas, Jesus was betrayed.' More optimistically she also agrees with the great poet Seamus Heaney, who says 'there is the responsibility of the writer to address, amplify and analyse the music of what happens, and also the other music, the "siren music", of what might be, that is "the crediting of marvels" '.

  11. Double-Blind, Placebo-Controlled, Randomized Phase III Trial Evaluating Pertuzumab Combined With Chemotherapy for Low Tumor Human Epidermal Growth Factor Receptor 3 mRNA-Expressing Platinum-Resistant Ovarian Cancer (PENELOPE).

    PubMed

    Kurzeder, Christian; Bover, Isabel; Marmé, Frederik; Rau, Joern; Pautier, Patricia; Colombo, Nicoletta; Lorusso, Domenica; Ottevanger, Petronella; Bjurberg, Maria; Marth, Christian; Barretina-Ginesta, Pilar; Vergote, Ignace; Floquet, Anne; Del Campo, Josep M; Mahner, Sven; Bastière-Truchot, Lydie; Martin, Nicolas; Oestergaard, Mikkel Z; Kiermaier, Astrid; Schade-Brittinger, Carmen; Polleis, Sandra; du Bois, Andreas; Gonzalez-Martin, Antonio

    2016-07-20

    The AGO-OVAR 2.29/ENGOT-ov14/PENELOPE prospectively randomized phase III trial evaluated the addition of pertuzumab to chemotherapy in patients with platinum-resistant ovarian carcinoma with low tumor human epidermal growth factor receptor 3 (HER3) mRNA expression. We report the results of the primary efficacy analysis. Eligible patients had ovarian carcinoma that progressed during or within 6 months of completing four or more platinum cycles, centrally tested low tumor HER3 mRNA expression (concentration ratio ≤ 2.81 by quantitative reverse transcriptase polymerase chain reaction on cobas z480 [Roche Molecular Diagnostics, Pleasanton, CA]), and no more than two prior lines of chemotherapy. After investigators' selection of the chemotherapy backbone (single-agent topotecan, weekly paclitaxel, or gemcitabine), patients were randomly assigned to also receive either placebo or pertuzumab (840-mg loading dose followed by 420 mg every 3 weeks). Stratification factors were selected chemotherapy, prior antiangiogenic therapy, and platinum-free interval. The primary end point was independent review committee-assessed progression-free survival (PFS). Additional end points included overall survival, investigator-assessed PFS, objective response rate, safety, patient-reported outcomes, and translational research. Overall, 156 patients were randomly assigned. Adding pertuzumab to chemotherapy did not significantly improve independent review committee-assessed PFS for the primary analysis (stratified hazard ratio, 0.74; 95% CI, 0.50 to 1.11; P = .14; median PFS, 4.3 months for pertuzumab plus chemotherapy v 2.6 months for placebo plus chemotherapy). Sensitivity analyses and secondary efficacy end point results were consistent with the primary analysis. The effect on PFS favoring pertuzumab was more pronounced in the gemcitabine and paclitaxel cohorts. No new safety signals were seen. Although the primary objective was not met, subgroup analyses showed trends in PFS favoring pertuzumab in the gemcitabine and paclitaxel cohorts, meriting further exploration of pertuzumab in ovarian cancer. © 2016 by American Society of Clinical Oncology.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B; Miller, Thomas Martin; Patton, Bruce W

    The characteristic X-rays produced by the interactions of the electron beam with the sample in a scanning electron microscope (SEM) are usually captured with a variable-energy detector, a process termed energy dispersive spectrometry (EDS). The purpose of this work is to exploit inverse simulations of SEM-EDS spectra to enable rapid determination of sample properties, particularly elemental composition. This is accomplished using penORNL, a modified version of PENELOPE, and a modified version of the traditional Levenberg-Marquardt nonlinear optimization algorithm, which together are referred to as MOZAIK-SEM. The overall conclusion of this work is that MOZAIK-SEM is a promising method for performing inverse analysis of X-ray spectra generated within an SEM. As this methodology exists now, MOZAIK-SEM has been shown to calculate the elemental composition of an unknown sample within a few percent of the actual composition.
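
    To illustrate the inverse-fitting idea in miniature (adjusting weights of simulated per-element spectra to match a measured spectrum with a Levenberg-Marquardt least-squares solver), here is a self-contained Python sketch; the Gaussian "spectra" and composition are synthetic and do not reproduce penORNL or MOZAIK-SEM itself.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(weights, basis_spectra, measured):
    """Difference between the measured spectrum and a weighted sum of simulated
    per-element basis spectra (the forward model in this toy version)."""
    return basis_spectra.T @ weights - measured

# Toy basis spectra for three elements over 200 energy channels.
rng = np.random.default_rng(1)
channels = np.arange(200)
def peak(centre, width):   # crude Gaussian standing in for a characteristic line
    return np.exp(-0.5 * ((channels - centre) / width) ** 2)
basis = np.vstack([peak(40, 3), peak(90, 4), peak(150, 5)])

true_weights = np.array([0.6, 0.3, 0.1])
measured = basis.T @ true_weights + rng.normal(0.0, 0.005, channels.size)

fit = least_squares(residuals, x0=np.full(3, 1.0 / 3.0),
                    args=(basis, measured), method="lm")   # Levenberg-Marquardt
print("recovered weights:", np.round(fit.x, 3))
```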

  13. X-ray energy optimization in minibeam radiation therapy.

    PubMed

    Prezado, Y; Thengumpallil, S; Renier, M; Bravin, A

    2009-11-01

    The purpose of this work is to assess which energy in minibeam radiation therapy provides the best compromise between the deposited dose in the tumor and the sparing of the healthy tissues. Monte Carlo simulations (PENELOPE 2006) have been used to calculate the peak-to-valley dose ratios (PVDR) in the healthy tissues and in the tumor for different beam energies. The maximization of the ratio between the PVDR in the healthy tissues and the PVDR in the tumor has been used as the optimization criterion. The main result of this work is that, for the parameters being used in preclinical trials (minibeam widths of 600 μm and a center-to-center separation of 1200 μm), the optimum beam energy is 375 keV. The conclusion is that this is the minibeam energy that should be used in the preclinical studies.
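
    Since the whole optimization rests on the PVDR, the sketch below extracts peak and valley doses from a lateral dose profile of a minibeam array; the profile is a toy stand-in, and the 600 μm width / 1200 μm pitch simply mirror the numbers quoted above.

```python
import numpy as np

def pvdr(x_um, dose, ctc_um=1200.0, n_beams=5):
    """Peak-to-valley dose ratio of a minibeam array: peaks sampled at beam
    centres, valleys sampled midway between adjacent beams."""
    centres = (np.arange(n_beams) - (n_beams - 1) / 2.0) * ctc_um
    valleys = (centres[:-1] + centres[1:]) / 2.0
    peak = np.interp(centres, x_um, dose).mean()
    valley = np.interp(valleys, x_um, dose).mean()
    return peak / valley

# Toy lateral profile: 600 um wide beams on a 1200 um pitch over a 5% background.
x = np.linspace(-4000.0, 4000.0, 4001)
dose = np.full_like(x, 0.05)
for c in (np.arange(5) - 2) * 1200.0:
    dose += np.where(np.abs(x - c) < 300.0, 1.0, 0.0)
print("PVDR ~", round(pvdr(x, dose), 1))
```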

  14. Calculation of electron Dose Point Kernel in water with GEANT4 for medical application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.

    2009-06-03

    The rapid insertion of new technologies in medical physics in recent years, especially in nuclear medicine, has been followed by a great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that provides tools to simulate particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.
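
    As a schematic of what a dose point kernel tally looks like (scoring energy deposits from a point isotropic electron source into concentric spherical shells and dividing by shell mass), here is a short Python sketch; the event list is random toy data rather than GEANT4 output, and the normalisation convention is only one of several used in the literature.

```python
import numpy as np

def dose_point_kernel(r_cm, edep_MeV, e0_MeV, n_shells=60, r_max_cm=0.1,
                      rho_g_cm3=1.0):
    """Histogram energy-deposition events into spherical shells around a point
    source and return (shell mid-radii, dose per shell normalised to the emitted
    energy e0)."""
    edges = np.linspace(0.0, r_max_cm, n_shells + 1)
    edep, _ = np.histogram(r_cm, bins=edges, weights=edep_MeV)
    shell_mass_g = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3) * rho_g_cm3
    dose = edep / shell_mass_g
    return 0.5 * (edges[1:] + edges[:-1]), dose / e0_MeV

# Toy event list (radius and energy of each deposit) standing in for
# step-by-step Monte Carlo output.
rng = np.random.default_rng(3)
r = rng.gamma(shape=3.0, scale=0.004, size=100_000)   # cm
e = rng.exponential(0.01, size=100_000)               # MeV
radii, dpk = dose_point_kernel(r, e, e0_MeV=1.0)
print("toy kernel peaks at r =", round(radii[int(np.argmax(dpk))], 4), "cm")
```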

  15. Geant4 Monte Carlo simulation of energy loss and transmission and ranges for electrons, protons and ions

    NASA Astrophysics Data System (ADS)

    Ivantchenko, Vladimir

    Geant4 is a toolkit for Monte Carlo simulation of particle transport, originally developed for applications in high-energy physics with a focus on experiments at the Large Hadron Collider (CERN, Geneva). The transparency and flexibility of the code have spread its use to other fields of research, e.g. radiotherapy and space science. The toolkit provides the possibility to simulate complex geometries, transport in electric and magnetic fields, and a variety of physics models of particle interactions with media. Geant4 has been used for the simulation of radiation effects for a number of space missions. Recent upgrades of the toolkit, released in December 2009, include a new model for ion electronic stopping power based on the revised version of the ICRU 73 Report, increasing the accuracy of ion transport simulation. In the current work we present the status of the Geant4 electromagnetic package for the simulation of particle energy loss, ranges and transmission. This has a direct implication for the simulation of ground testing setups at existing European facilities and for the simulation of radiation effects in space. A number of improvements were introduced for electron and proton transport, followed by a thorough validation. It was the aim of the present study to validate the ranges against reference data from the United States National Institute of Standards and Technology (NIST) ESTAR, PSTAR and ASTAR databases. We compared Geant4 and NIST ranges of electrons using different Geant4 models. The best agreement was found for Penelope, except at very low energies in heavy materials, where the Standard package gave better results. Geant4 proton ranges in water agreed with NIST within 1%. The validation of the new ion model is performed against recent data on Bragg peak position in water. The data from transmission of carbon ions through various absorbers following the Bragg peak in water demonstrate that the new Geant4 model significantly improves the precision of the ion range. The absolute accuracy of the ion range achieved is at the level of 1%.
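
    Because the comparison above hinges on ranges derived from stopping powers, the following Python sketch evaluates the CSDA range R(E) = ∫ dE'/S(E') by trapezoidal integration of a tabulated total mass stopping power; the tiny table is an order-of-magnitude placeholder, not the NIST ESTAR data.

```python
import numpy as np

def csda_range(energy_MeV, table_E_MeV, table_S_MeV_cm2_g):
    """CSDA range R(E) = integral of dE'/S(E') from the lowest tabulated energy
    up to E (the contribution below the first table point is neglected here)."""
    E = np.asarray(table_E_MeV, dtype=float)
    S = np.asarray(table_S_MeV_cm2_g, dtype=float)
    m = E <= energy_MeV
    E, inv_S = E[m], 1.0 / S[m]
    return float(np.sum(0.5 * (inv_S[1:] + inv_S[:-1]) * np.diff(E)))  # g/cm^2

# Placeholder electron stopping powers in water (illustrative values only).
E_tab = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0]          # MeV
S_tab = [22.5, 4.1, 2.0, 1.85, 1.84, 1.9, 2.0]         # MeV cm^2/g
print("approximate CSDA range at 1 MeV:",
      round(csda_range(1.0, E_tab, S_tab), 3), "g/cm^2")
```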

  16. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements

    PubMed Central

    do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi

    2015-01-01

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions as the acrylic support used for positioning the film, but at a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertically, that is, perpendicular to the phantom. To validate this procedure, first of all a Monte Carlo simulation using the PENELOPE code was done to evaluate the differences between the dose distributions measured by the film at SDDs of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distributions acquired at the two SDDs was 99.92% ± 0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85% ± 0.26%, and the mean percentage difference in the normalization point doses was −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS number: 87.55.Qr, 87.55.km, 87.55.N‐ PMID:26699306
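
    Because the pass rates above come from a gamma-function comparison, the sketch below implements a brute-force global 2D gamma analysis (3%/3 mm by default) on synthetic dose maps; it is a generic illustration of the metric, not the film-analysis software used in the study.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, pixel_mm=1.0, dd=0.03, dta_mm=3.0,
                    low_dose_cut=0.10):
    """For each reference pixel above the low-dose threshold, take the minimum of
    sqrt((dose diff / dd)^2 + (distance / dta)^2) over a local search window and
    report the fraction of pixels with gamma <= 1 (global normalisation)."""
    norm = dose_ref.max()
    search = int(np.ceil(2.0 * dta_mm / pixel_mm))
    offsets = [(i, j) for i in range(-search, search + 1)
               for j in range(-search, search + 1)]
    ny, nx = dose_ref.shape
    passed = total = 0
    for y in range(ny):
        for x in range(nx):
            if dose_ref[y, x] < low_dose_cut * norm:
                continue
            best = np.inf
            for dy, dx in offsets:
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx:
                    dose_term = (dose_eval[yy, xx] - dose_ref[y, x]) / (dd * norm)
                    dist_term = pixel_mm * np.hypot(dy, dx) / dta_mm
                    best = min(best, np.hypot(dose_term, dist_term))
            passed += best <= 1.0
            total += 1
    return passed / total

rng = np.random.default_rng(7)
reference = np.outer(np.hanning(64), np.hanning(64))                    # planned dose
evaluated = reference * (1.0 + rng.normal(0.0, 0.01, reference.shape))  # "measured"
print(f"gamma pass rate: {gamma_pass_rate(evaluated, reference):.1%}")
```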

  17. COPPER-64 Production Studies with Natural Zinc Targets at Deuteron Energy up to 19 Mev and Proton Energy from 141 Down to 31 Mev

    NASA Astrophysics Data System (ADS)

    Bonardi, Mauro L.; Birattari, Claudio; Groppi, Flavia; Song Mainard, Hae; Zhuikov, Boris L.; Kokhanyuk, Vladimir M.; Lapshina, Elena V.; Mebel, Michail V.; Menapace, Enzo

    2004-07-01

    High specific activity no-carrier-added 64Cu is a β-/β+ emitting radionuclide of increasing interest for PET imaging, as well as for systemic and targeted radioimmunotherapy of tumors. Its character as an intense Auger emitter is still under investigation. The cross-sections for production of 64Cu from Zn targets of natural isotopic composition were measured in the deuteron energy range from threshold up to 19 MeV and in the proton energy range from 141 down to 31 MeV. The stacked-foil technique was used both at the K=38 cyclotron of JRC-Ispra (CEC, Italy) and at the 160 MeV intersection point of the INR proton LINAC in Troitsk, Russia. Several Ga, Zn, Cu, Ni, Co, V, Fe and Mn radionuclides were detected in the Zn targets at the EOB. Optimized irradiation conditions are reported as a function of deuteron energy and energy loss into the Zn target, as well as target irradiation time and cooling time after radiochemistry. The activity of n.c.a. 64Cu was measured through its only γ emission of 1346 keV (i.e. 0.473% intensity) both by instrumental and radiochemical methods, due to the non-specificity of the annihilation radiation at 511 keV. For the latter purpose, it was necessary to carry out a selective radiochemical separation of Ga(III) radionuclides by liquid/liquid extraction from the bulk of irradiated Zn targets and other spallation products, which remained in the 7 M HCl aqueous phase. Anion exchange chromatography tests had been carried out to separate the 64Cu from all other radionuclides in n.c.a. form. Theoretical calculations of cross-sections were performed with the EMPIRE II and PENELOPE codes for deuteron reactions, and with the CEF model and the HMS-ALICE hybrid model for proton reactions. The theoretical results are presented and compared with the experimental values.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Rovira, I., E-mail: immamartinez@gmail.com; Prezado, Y.; Fois, G.

    Purpose: Spatial fractionation of the dose has proven to be a promising approach to increase the tolerance of healthy tissue, which is the main limitation of radiotherapy. A good example of that is GRID therapy, which has been successfully used in the management of large tumors with low toxicity. The aim of this work is to explore new avenues using nonconventional sources: GRID therapy using kilovoltage (synchrotron) x-rays, the use of very high-energy electrons, and proton GRID therapy. They share in common the use of the smallest possible grid sizes in order to exploit the dose-volume effects. Methods: Monte Carlo simulations (PENELOPE/PENEASY and GEANT4/GATE codes) were used to study dose distributions resulting from irradiations in different configurations of the three proposed techniques. As figures of merit, percentage (peak and valley) depth dose curves, penumbras, and central peak-to-valley dose ratios (PVDR) were evaluated. As shown in previous biological experiments, high PVDR values are required for healthy tissue sparing. Superior tumor control may benefit from a lower PVDR. Results: High PVDR values were obtained in the healthy tissue for the three cases studied. When low energy photons are used, the treatment of deep-seated tumors can still be performed with submillimetric grid sizes. Superior PVDR values were reached with the other two approaches in the first centimeters along the beam path. The use of protons has the advantage of delivering a uniform dose distribution in the tumor, while healthy tissue benefits from the spatial fractionation of the dose. In the three evaluated techniques, there is a net reduction in penumbra with respect to radiosurgery. Conclusions: The high PVDR values in the healthy tissue and the use of small grid sizes in the three presented approaches might constitute a promising alternative to treat tumors with such spatially fractionated radiotherapy techniques. The dosimetric results presented here support the interest of performing radiobiology experiments in order to evaluate these new avenues.

  19. SU-G-TeP3-13: The Role of Nanoscale Energy Deposition in the Development of Gold Nanoparticle-Enhanced Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkby, C; The University of Calgary, Calgary, AB; Koger, B

    2016-06-15

    Purpose: Gold nanoparticles (GNPs) can enhance radiotherapy effects. The high photoelectric cross section of gold relative to tissue, particularly at lower energies, leads to localized dose enhancement. However, in a clinical context, photon energies must also be sufficient to reach a target volume at a given depth. These properties must be balanced to optimize such a therapy. Given that nanoscale energy deposition patterns around GNPs play a role in determining biological outcomes, in this work we seek to establish their role in this optimization process. Methods: The PENELOPE Monte Carlo code was used to generate spherical dose deposition kernels in 1000 nm diameter spheres around 50 nm diameter GNPs in response to monoenergetic photons incident on the GNP. Induced "lesions" were estimated by either a local effect model (LEM) or a mean dose model (MDM). The ratio of these estimates was examined for a range of photon energies (10 keV to 2 MeV), for three sets of linear-quadratic parameters. Results: The models produce distinct differences in expected lesion values; the lower the alpha-beta ratio, the greater the difference. The ratio of expected lesion values remained constant within 5% for energies of 40 keV and above across all parameter sets and rose to a difference of 35% for lower energies only for the lowest alpha-beta ratio. Conclusion: Consistent with other work, these calculations suggest nanoscale energy deposition patterns matter in predicting biological response to GNP-enhanced radiotherapy. However, the ratio of expected lesions between the different models is largely independent of energy, indicating that GNP-enhanced radiotherapy scenarios can be optimized in photon energy without consideration of the nanoscale patterns. Special attention may be warranted for energies of 20 keV or below and low alpha-beta ratios.
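
    To make the LEM-versus-MDM comparison tangible, the sketch below evaluates linear-quadratic lethal-event estimates either locally per sub-volume or at the mean dose, using a made-up steeply falling dose kernel around a GNP; the alpha/beta values and the 1/r^2 fall-off are assumptions for illustration only, not the PENELOPE kernels of the study.

```python
import numpy as np

def lesions_local_effect(dose_subvolumes, alpha, beta):
    """Local-effect-style estimate: LQ lethal-event density evaluated in each
    sub-volume and then averaged, mean(alpha*d + beta*d**2)."""
    d = np.asarray(dose_subvolumes)
    return np.mean(alpha * d + beta * d**2)

def lesions_mean_dose(dose_subvolumes, alpha, beta):
    """Mean-dose-model estimate: LQ expression evaluated at the average dose."""
    D = float(np.mean(dose_subvolumes))
    return alpha * D + beta * D**2

# Toy nanoscale kernel: dose falling roughly as 1/r^2 from a 50 nm GNP surface
# out to a 1 um sphere (made-up numbers).
r = np.linspace(0.025, 0.5, 200)          # micrometres
d = 2.0 * (r[0] / r) ** 2                 # Gy

for label, (alpha, beta) in [("alpha/beta = 2 Gy", (0.10, 0.05)),
                             ("alpha/beta = 10 Gy", (0.30, 0.03))]:
    ratio = lesions_local_effect(d, alpha, beta) / lesions_mean_dose(d, alpha, beta)
    print(f"{label}: LEM/MDM = {ratio:.2f}")
```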

  20. Impact of cardiosynchronous brain pulsations on Monte Carlo calculated doses for synchrotron micro- and minibeam radiation therapy.

    PubMed

    Manchado de Sola, Francisco; Vilches, Manuel; Prezado, Yolanda; Lallena, Antonio M

    2018-05-15

    The purpose of this study was to assess the effects of brain movements induced by heartbeat on dose distributions in synchrotron micro- and minibeam radiation therapy and to develop a model to help guide decisions and planning for future clinical trials. The Monte Carlo code PENELOPE was used to simulate the irradiation of a human head phantom with a variety of micro- and minibeam arrays, with beams narrower than 100 μm and above 500 μm, respectively, and with radiation fields of 1 × 2 cm and 2 × 2 cm. The dose in the phantom due to these beams was calculated by superposing the dose profiles obtained for a single beam of 1 μm × 2 cm. A parameter δ, accounting for the total displacement of the brain during the irradiation and due to the cardiosynchronous pulsation, was used to quantify the impact on peak-to-valley dose ratios and the full width at half maximum. The difference between the maximum (at the phantom entrance) and the minimum (at the phantom exit) values of the peak-to-valley dose ratio reduces when the parameter δ increases. The full width at half maximum remains almost constant with depth for any δ value. Sudden changes in the two quantities are observed at the interfaces between the various tissues (brain, skull, and skin) present in the head phantom. The peak-to-valley dose ratio at the center of the head phantom reduces when δ increases, remaining above 70% of the static value only for minibeams and δ smaller than ∼200 μm. Optimal setups for brain treatments with synchrotron radiation micro- and minibeam combs depend on the brain displacement due to cardiosynchronous pulsation. Peak-to-valley dose ratios larger than 90% of the maximum values obtained in the static case occur only for minibeams and relatively large dose rates. © 2018 American Association of Physicists in Medicine.

  1. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    PubMed

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  2. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implements a parallelization scheme for MC simulations that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable for use with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, makes it easy to run PENELOPE in parallel without requiring specific libraries or significant alterations of the sequential code. Program summary 1. Title of program: clonEasy. Catalogue identifier: ADYD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a Unix style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1). Compilers: GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux). Programming language used: Linux shell (bash) script, FORTRAN 77. No. of bits in a word: 32. No. of lines in distributed program, including test data, etc.: 1916. No. of bytes in distributed program, including test data, etc.: 18 202. Distribution format: tar.gz. Nature of the physical problem: There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution: The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. 
Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator at distant positions of the random number cycle and can be used, for instance, in a parallel simulation. The values are found using the formula S_J = (a^J S) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program: The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31 when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time: The execution time depends on the parameters of the MLCG used and the distance between the generated seeds. The generation of 10^6 seeds separated by 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
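
    The seed-spacing idea behind seedsMLCG can be illustrated in a few lines of Python: for an MLCG S(n+1) = (a·S(n)) mod m, the state after J steps is (a^J·S) mod m, so disjoint sub-sequences for each clone are obtained by jumping ahead in multiples of J. The constants below are the two MLCG components widely quoted for L'Ecuyer's RANECU generator; treat the snippet as a sketch rather than a drop-in replacement for the published FORTRAN program.

        # Disjoint seed generation for a multiplicative linear congruential generator.
        def mlcg_jump(seed, a, m, distance):
            # pow with three arguments computes a**distance mod m efficiently.
            return (pow(a, distance, m) * seed) % m

        def clone_seeds(seed, a, m, n_clones, distance):
            # Seeds separated by 'distance' draws along the generator's cycle.
            return [mlcg_jump(seed, a, m, i * distance) for i in range(n_clones)]

        # The two MLCGs combined in RANECU (L'Ecuyer's constants).
        A1, M1 = 40014, 2147483563
        A2, M2 = 40692, 2147483399

        seeds1 = clone_seeds(12345, A1, M1, n_clones=4, distance=10**12)
        seeds2 = clone_seeds(67890, A2, M2, n_clones=4, distance=10**12)
        for i, (s1, s2) in enumerate(zip(seeds1, seeds2)):
            print(f"clone {i}: iseed1 = {s1:10d}, iseed2 = {s2:10d}")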

  3. Dosimetric characterization of the (60)Co BEBIG Co0.A86 high dose rate brachytherapy source using PENELOPE.

    PubMed

    Guerrero, Rafael; Almansa, Julio F; Torres, Javier; Lallena, Antonio M

    2014-12-01

    (60)Co sources are being used as an alternative to (192)Ir sources in high dose rate brachytherapy treatments. In a recent document from AAPM and ESTRO, a consensus dataset for the (60)Co BEBIG (model Co0.A86) high dose rate source was prepared by using results taken from different publications due to discrepancies observed among them. The aim of the present work is to provide a new calculation of the dosimetric characteristics of that (60)Co source according to the recommendations of the AAPM and ESTRO report. Radial dose function, anisotropy function, air-kerma strength, dose rate constant and absorbed dose rate in water have been calculated and compared to the results of previous works. Simulations using the two different geometries considered by other authors have been carried out and the effect of the cable density and length has been studied. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  4. An update on the analysis of the Princeton 19Ne beta asymmetry measurement

    NASA Astrophysics Data System (ADS)

    Combs, Dustin; Calaprice, Frank; Jones, Gordon; Pattie, Robert; Young, Albert

    2013-10-01

    We report on the progress of a new analysis of the 1994 19Ne beta asymmetry measurement conducted at Princeton University. In this experiment, a beam of 19Ne atoms was polarized with a Stern-Gerlach magnet and then entered a thin-walled mylar cell through a slit fabricated from a piece of microchannel plate. A pair of Si(Li) detectors at either end of the apparatus were aligned with the direction of spin polarization (one parallel and one anti-parallel to the spin of the 19Ne) and detected positrons from the decays. The difference in the rates in the two detectors was used to calculate the asymmetry. A new analysis procedure has been undertaken using the Monte Carlo package PENELOPE with the goal of determining the systematic uncertainty due to positrons scattering from the face of the detectors, which causes incorrect reconstruction of the initial direction of the positron momentum. This was a leading source of systematic uncertainty in the 1994 experiment.

  5. On the suitability of ultrathin detectors for absorbed dose assessment in the presence of high-density heterogeneities.

    PubMed

    Bueno, M; Carrasco, P; Jornet, N; Muñoz-Montplet, C; Duch, M A

    2014-08-01

    The aim of this study was to evaluate the suitability of several detectors for the determination of absorbed dose in bone. Three types of ultrathin LiF-based thermoluminescent dosimeters (TLDs) - two LiF:Mg,Cu,P-based (MCP-Ns and TLD-2000F) and a (7)Li-enriched LiF:Mg,Ti-based (MTS-7s) - as well as EBT2 Gafchromic films were used to measure percentage depth-dose distributions (PDDs) in a water-equivalent phantom with a bone-equivalent heterogeneity for 6 and 18 MV and a set of field sizes ranging from 5 x 5 cm2 to 20 x 20 cm2. MCP-Ns, TLD-2000F, MTS-7s, and EBT2 have active layers of 50, 20, 50, and 30 μm, respectively. Monte Carlo (MC) dose calculations (PENELOPE code) were used as the reference and helped to understand the experimental results and to evaluate the potential perturbation of the fluence in bone caused by the presence of the detectors. The energy dependence and linearity of the TLDs' response were evaluated. TLDs exhibited flat energy responses (within 2.5%) and linearity with dose (within 1.1%) within the range of interest for the selected beams. The results revealed that all considered detectors perturb the electron fluence with respect to the energy inside the bone-equivalent material. MCP-Ns and MTS-7s underestimated the absorbed dose in bone by 4%-5%. EBT2 exhibited comparable accuracy to MTS-7s and MCP-Ns. TLD-2000F was able to determine the dose within 2% accuracy. No dependence on the beam energy or field size was observed. The MC calculations showed that a [Formula: see text] thick detector can provide reliable dose estimations in bone regardless of whether it is made of LiF, water or EBT's active layer material. TLD-2000F was found to be suitable for providing reliable absorbed dose measurements in the presence of bone for high-energy x-ray beams.

  6. MO-F-CAMPUS-J-04: Radiation Heat Load On the MR System of the Elekta Atlantic System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Towe, S; Roberts, D; Overweg, J

    2015-06-15

    Purpose: The Elekta Atlantic system combines a digital linear accelerator system with a 1.5T Philips MRI machine. This study aimed to assess the energy deposited within the cryostat system when the radiation beam passes through the cryostat. The cryocooler on the magnet has a cooling capacity which is about 1 Watt in excess of the cryogenic heat leak into the magnet's cold mass. A pressure-controlled heater inside the magnet balances the excess refrigeration power such that the helium pressure in the tank is kept slightly above ambient air pressure. If radiation power is deposited in the cold mass then this heater will need less power to maintain pressure equilibrium, and if the radiation heat load exceeds the excess cryocooler capacity the pressure will rise. Methods: An in-house CAD based Monte Carlo code based on Penelope was used to model the entire MR-Linac system to quantify the heat load on the magnet's cold mass. These results were then compared to experimental results obtained from an Elekta Atlantic system installed in UMC-Utrecht. Results: For a field size of 25 cm x 22 cm and a dose rate of 107 MU/min, the energy deposited by the radiation beam led to a reduction in heater power from 1.16 to 0.73 W. Simulations predicted a reduction to 0.69 W, which is in good agreement. For the worst case field size (largest) and maximum dose rate the cryostat cooler capacity was exceeded. This resulted in a pressure rise within the system, but was such that continuous irradiation for over 12 hours would be required before the magnet would start blowing off helium. Conclusion: The study concluded that the Atlantic system does not have to be duty cycle restricted, even for the worst case non-clinical scenario, and that there are no adverse effects on the MR system. Stephen Towe and David Roberts both work for Elekta; Ezra Van Lanen works for Philips Healthcare; Johan Overweg works for Philips Innovative Technologies.

  7. SU-G-IeP2-04: Dosimetric Accuracy of a Monte Carlo-Based Tool for Cone-Beam CT Organ Dose Calculation: Validation Against OSL and XRQA2 Film Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chesneau, H; Lazaro, D; Blideanu, V

    Purpose: The intensive use of Cone-Beam Computed Tomography (CBCT) during radiotherapy treatments raises some questions about the dose to healthy tissues delivered during image acquisitions. We hence developed a Monte Carlo (MC)-based tool to predict doses to organs delivered by the Elekta XVI kV-CBCT. This work aims at assessing the dosimetric accuracy of the MC tool, in all tissue types. Methods: The kV-CBCT MC model was developed using the PENELOPE code. The beam properties were validated against measured lateral and depth dose profiles in water, and energy spectra measured with a CdTe detector. The CBCT simulator accuracy then required verification in clinical conditions. For this, we compared calculated and experimental dose values obtained with OSL nanoDots and XRQA2 films inserted in CIRS anthropomorphic phantoms (male, female, and 5-year old child). Measurements were performed at different locations, including bone and lung structures, and for several acquisition protocols: lung, head-and-neck, and pelvis. OSL and film measurements were corrected when possible for energy dependence, by taking into account spectral variations between calibration and measurement conditions. Results: Comparisons between measured and MC dose values are summarized in table 1. A mean difference of 8.6% was achieved for OSLs when the energy correction was applied, and 89.3% of the 84 dose points were within uncertainty intervals, including those in bones and lungs. Results with XRQA2 are not as good, because incomplete information about electronic equilibrium in film layers hampered the application of a simple energy correction procedure. Furthermore, measured and calculated doses (Fig.1) are in agreement with the literature. Conclusion: The MC-based tool developed was validated with an extensive set of measurements, and enables accurate organ dose calculation. It can now be used to compute and report doses to organs for clinical cases, and also to drive strategies to optimize imaging protocols.

  8. Design and characterization of a new high-dose-rate brachytherapy Valencia applicator for larger skin lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candela-Juan, C., E-mail: ccanjuan@gmail.com; Niatsetski, Y.; Laarse, R. van der

    Purpose: The aims of this study were (i) to design a new high-dose-rate (HDR) brachytherapy applicator for treating surface lesions with planning target volumes larger than 3 cm in diameter and up to 5 cm in size, using the microSelectron-HDR or Flexitron afterloader (Elekta Brachytherapy) with a {sup 192}Ir source; (ii) to calculate by means of the Monte Carlo (MC) method the dose distribution for the new applicator when it is placed against a water phantom; and (iii) to validate experimentally the dose distributions in water. Methods: The PENELOPE2008 MC code was used to optimize dwell positions and dwell times. Next, the dose distribution in a water phantom and the leakage dose distribution around the applicator were calculated. Finally, MC data were validated experimentally for a {sup 192}Ir mHDR-v2 source by measuring (i) dose distributions with radiochromic EBT3 films (ISP); (ii) percentage depth–dose (PDD) curve with the parallel-plate ionization chamber Advanced Markus (PTW); and (iii) absolute dose rate with EBT3 films and the PinPoint T31016 (PTW) ionization chamber. Results: The new applicator is made of tungsten alloy (Densimet) and consists of a set of interchangeable collimators. Three catheters are used to allocate the source at prefixed dwell positions with preset weights to produce a homogeneous dose distribution at the typical prescription depth of 3 mm in water. The same plan is used for all available collimators. PDD, absolute dose rate per unit of air kerma strength, and off-axis profiles in a cylindrical water phantom are reported. These data can be used for treatment planning. Leakage around the applicator was also scored. The dose distributions, PDD, and absolute dose rate calculated agree within experimental uncertainties with the doses measured: differences of MC data with chamber measurements are up to 0.8% and with radiochromic films are up to 3.5%. Conclusions: The new applicator and the dosimetric data provided here will be a valuable tool in clinical practice, making treatment of large skin lesions simpler, faster, and safer. Also the dose to surrounding healthy tissues is minimal.

  9. SU-F-J-147: Magnetic Field Dose Response Considerations for a Linac Monitor Chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, M; Fallone, B

    Purpose: The impact of magnetic fields on the readings of a linac monitor chamber has not yet been investigated. Herein we examine the total dose response as well as any deviations in the beam parameters of flatness and symmetry when a Varian monitor chamber is irradiated within an applied magnetic field. This work has direct application to the development of Linac-MR systems worldwide. Methods: A Varian monitor chamber was modeled in the Monte Carlo code PENELOPE and irradiated in the presence of a magnetic field with a phase space generated from a model of a Linac-MR prototype system. The magnetic field strength was stepped from 0 to 3.0T in both parallel and perpendicular directions with respect to the normal surface of the phase space. The dose to each of the four regions in the monitor chamber was scored separately for every magnetic field adaptation to evaluate the effect of the magnetic field on flatness and symmetry. Results: When the magnetic field is perpendicular to the phase space normal we see a change in dose response with a maximal deviation (10–25% depending on the chamber region) near 0.75T. In the direction of electron deflection we see, as expected, opposite responses in chamber regions, leading to a measured asymmetry. With a magnetic field parallel to the phase space normal we see no measured asymmetries; however, there is a monotonic rise in dose response leveling off at about +12% near 2.5T. Conclusion: Attention must be given to correct for the strength and direction of the magnetic field at the location of the linac monitor chamber in hybrid Linac-MR devices. Otherwise the dose sampled by these chambers may not represent the actual dose expected at isocentre; additionally, there may be a need to correct for the symmetry of the beam recorded by the monitor chamber. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).

  10. On the uncertainties of photon mass energy-absorption coefficients and their ratios for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Andreo, Pedro; Burns, David T.; Salvat, Francesc

    2012-04-01

    A systematic analysis of the available data has been carried out for mass energy-absorption coefficients and their ratios for air, graphite and water for photon energies between 1 keV and 2 MeV, using representative kilovoltage x-ray spectra for mammography and diagnostic radiology below 100 kV, and for 192Ir and 60Co gamma-ray spectra. The aim of this work was to establish ‘an envelope of uncertainty’ based on the spread of the available data. Type A uncertainties were determined from the results of Monte Carlo (MC) calculations with the PENELOPE and EGSnrc systems, yielding mean values for µen/ρ with a given statistical standard uncertainty. Type B estimates were based on two groupings. The first grouping consisted of MC calculations based on a similar implementation but using different data and/or approximations. The second grouping was formed by various datasets, obtained by different authors or methods using the same or different basic data, and with different implementations (analytical, MC-based, or a combination of the two); these datasets were the compilations of NIST, Hubbell, Johns-Cunningham, Attix and Higgins, plus MC calculations with PENELOPE and EGSnrc. The combined standard uncertainty, uc, for the µen/ρ values for the mammography x-ray spectra is 2.5%, decreasing gradually to 1.6% for kilovoltage x-ray spectra up to 100 kV. For 60Co and 192Ir, uc is approximately 0.1%. The Type B uncertainty analysis for the ratios of µen/ρ values includes four methods of analysis and concludes that for the present data the assumption that the data interval represents 95% confidence limits is a good compromise. For the mammography x-ray spectra, the combined standard uncertainties of (µen/ρ)graphite,air and (µen/ρ)graphite,water are 1.5%, and 0.5% for (µen/ρ)water,air, decreasing gradually down to uc = 0.1% for the three µen/ρ ratios for the gamma-ray spectra. The present estimates are shown to coincide well with those of Hubbell (1977 Rad. Res. 70 58-81), except for the lowest energy range (radiodiagnostic) where it is concluded that current databases and their systematic analysis represent an improvement over the older Hubbell estimations. The results for (µen/ρ)graphite,air for the gamma-ray dosimetry range are moderately higher than those of Seltzer and Bergstrom (2005 private communication).
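
    The combination of Type A and Type B components described above follows the usual quadrature rule; a minimal sketch with illustrative numbers (not the paper's tabulated values) is given below. The 95%-interval interpretation of the dataset spread corresponds to dividing the half-width by approximately 2, while a rectangular interpretation would divide by sqrt(3).

        import numpy as np

        def type_b_from_interval(half_width, coverage="95"):
            # Spread of datasets treated as a 95% interval (divide by ~1.96)
            # or as a rectangular distribution (divide by sqrt(3)).
            return half_width / 1.96 if coverage == "95" else half_width / np.sqrt(3.0)

        def combined_uncertainty(u_type_a, u_type_b):
            # Standard quadrature combination of independent components.
            return np.hypot(u_type_a, u_type_b)

        # Hypothetical example for a mu_en/rho ratio: 0.3% Monte Carlo statistics (Type A)
        # and a 2.4% half-spread among datasets interpreted as a 95% interval (Type B).
        u_a = 0.3
        u_b = type_b_from_interval(2.4, coverage="95")
        print(f"combined standard uncertainty: {combined_uncertainty(u_a, u_b):.2f} %")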

  11. Physical characterization of single convergent beam device for teletherapy: theoretical and Monte Carlo approach.

    PubMed

    Figueroa, R G; Valente, M

    2015-09-21

    The main purpose of this work is to determine the feasibility and physical characteristics of a new teletherapy device based on the application of a convergent x-ray beam, with energies like those used in radiotherapy, providing highly concentrated dose delivery to the target. We have denominated it Convergent Beam Radio Therapy (CBRT). Analytical methods are developed first in order to determine the dosimetric characteristics of an ideal convergent photon beam in a hypothetical water phantom. Then, using the PENELOPE Monte Carlo code, a similar convergent beam applied to the water phantom is compared with that of the analytical method. The CBRT device (Converay(®)) is designed to adapt to the head of LINACs. The converging photon beam effect is achieved thanks to the perpendicular impact of LINAC electrons on a large thin spherical cap target where Bremsstrahlung is generated (high-energy x-rays). This way, the electrons impact upon various points of the cap (CBRT condition), aimed at the focal point. With the X radiation (Bremsstrahlung) directed forward, a system of movable collimators emits many beams from the output that form a virtually definitive convergent beam. Other Monte Carlo simulations are performed using realistic conditions. The simulations are performed for a thin target in the shape of a large, thin, spherical cap, with a radius of around 10-30 cm and a curvature radius of approximately 70 to 100 cm, and a cubic water phantom centered on the focal point of the cap. All the interaction mechanisms of the Bremsstrahlung radiation with the phantom are taken into consideration for different energies and cap thicknesses. Also, the magnitudes of the electric and/or magnetic fields necessary to deflect clinical-use electron beams (0.1 to 20 MeV) are determined using electromagnetism equations with relativistic corrections. In this way the above-mentioned beam is manipulated and guided for its perpendicular impact upon the spherical cap. The first results show dose peaks at depth, with shapes qualitatively similar to those from hadrontherapy techniques. The obtained results demonstrate that dose peaks at depth are generated at the focus point or isocenter. These results are consistent with those obtained with Monte Carlo codes. The peak-focus is independent of the energy of the photon beam, though its intensity is not. The realistic results achieved with the Monte Carlo code show that the Bremsstrahlung generated on the thin cap is mainly directed towards the focus point. The aperture angle at each impact point depends primarily on the beam energy, the atomic number Z and the thickness of the target. There is also a poly-collimator coaxial to the cap or ring with many holes, permitting a clean convergent-exit x-ray beam with a dose distribution that is similar to the ideal case. The electric and magnetic fields needed to control the deflection of the electron beams in the CBRT geometry are highly feasible using specially designed electric and/or magnetic devices that, respectively, have voltage and current values that are technically achievable. However, it was found that magnetic devices represent a more suitable option for electron beam control, especially at high energies. The main conclusion is that the development of such a device is feasible. Due to its features, this technology might be considered a powerful new tool for external radiotherapy with photons.
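
    The magnetic field required to steer relativistic electron beams of the energies quoted above can be estimated from the magnetic rigidity B·r = p/q; the sketch below uses the standard relativistic momentum-energy relation and an illustrative bending radius (0.5 m), which is not the authors' geometry.

        import numpy as np

        MEC2 = 0.5109989  # electron rest energy, MeV

        def momentum_mev_per_c(kinetic_mev):
            # Relativistic relation: (pc)^2 = T^2 + 2 T m c^2
            return np.sqrt(kinetic_mev * (kinetic_mev + 2.0 * MEC2))

        def bending_field_tesla(kinetic_mev, radius_m):
            # Magnetic rigidity: B*r [T m] = pc [MeV] / 299.792458
            return momentum_mev_per_c(kinetic_mev) / (299.792458 * radius_m)

        # Field needed to bend electrons on a 0.5 m radius (illustrative geometry).
        for t in [0.1, 1.0, 6.0, 20.0]:
            print(f"T = {t:5.1f} MeV  ->  B = {bending_field_tesla(t, 0.5) * 1000:7.2f} mT")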

  12. An implementation of discrete electron transport models for gold in the Geant4 simulation toolkit

    NASA Astrophysics Data System (ADS)

    Sakata, D.; Incerti, S.; Bordage, M. C.; Lampe, N.; Okada, S.; Emfietzoglou, D.; Kyriakou, I.; Murakami, K.; Sasaki, T.; Tran, H.; Guatelli, S.; Ivantchenko, V. N.

    2016-12-01

    Gold nanoparticle (GNP) boosted radiation therapy can enhance the biological effectiveness of radiation treatments by increasing the quantity of direct and indirect radiation-induced cellular damage. As the physical effects of GNP boosted radiotherapy occur across energy scales that descend down to 10 eV, Monte Carlo simulations require discrete physics models down to these very low energies in order to avoid underestimating the absorbed dose and secondary particle generation. Discrete physics models for electron transport down to 10 eV have been implemented within the Geant4-DNA low energy extension of Geant4. Such models allow the investigation of GNP effects at the nanoscale. At low energies, the new models show better agreement with experimental data on the backscattering coefficient, and they show similar performance for transmission coefficient data as the Livermore and Penelope models already implemented in Geant4. These new models are applicable in simulations focused on estimating the relative biological effectiveness of radiation in GNP boosted radiotherapy applications with photon and electron radiation sources.

  13. A deep-branching clade of retrovirus-like retrotransposons in bdelloid rotifers

    PubMed Central

    Gladyshev, Eugene A.; Meselson, Matthew; Arkhipova, Irina R.

    2007-01-01

    Rotifers of class Bdelloidea, a group of aquatic invertebrates in which males and meiosis have never been documented, are also unusual in their lack of multicopy LINE-like and gypsy-like retrotransposons, groups inhabiting the genomes of nearly all other metazoans. Bdelloids do contain numerous DNA transposons, both intact and decayed, and domesticated Penelope-like retroelements Athena, concentrated at telomeric regions. Here we describe two LTR retrotransposons, each found at low copy number in a different bdelloid species, which define a clade different from previously known clades of LTR retrotransposons. Like bdelloid DNA transposons and Athena, these elements have been found preferentially in telomeric regions. Unlike bdelloid DNA transposons, many of which are decayed, the newly described elements, named Vesta and Juno, inhabiting the genomes of Philodina roseola and Adineta vaga, respectively, appear to be intact and to represent recent insertions, possibly from an exogenous source. We describe the retrovirus-like structure of the new elements, containing gag, pol, and env-like open reading frames, and discuss their possible origins, transmission, and behavior in bdelloid genomes. PMID:17129685

  14. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    NASA Astrophysics Data System (ADS)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of specific dose distributions in water is evaluated in MCNP6.1 with the regular condensed-history algorithm (the "detailed electron energy-loss straggling logic") and with the newly proposed electron transport algorithm (the "single event algorithm"). Dose Point Kernels (DPKs) are calculated with monoenergetic electrons of 50, 100, 500, 1000 and 3000 keV for different scoring cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc or Penelope, is performed. When the detailed electron energy-loss straggling logic is used with the default setting (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV and 3 MeV dose profiles. With an appropriate number of sub-steps (ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between the reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts vanish completely, showing a possible transport treatment for energies of less than a hundred keV, in accordance with the reference for any scoring cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and so the current unreliability of the method, are in part due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those energy ranges. Accordingly, special care has to be taken in the choice of settings when calculating electron dose distributions with MCNP6, in particular with regard to dosimetry or nuclear medicine applications.
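
    For reference, the dose point kernel scoring itself is straightforward once energy deposits are available: energy deposited in concentric spherical shells divided by the shell mass. The sketch below is a generic illustration under assumed inputs (a list of deposition radii and energies), not MCNP6 tally output.

        import numpy as np

        def dose_point_kernel(radii_cm, edep_mev, shell_edges_cm, density_g_cm3=1.0):
            # Histogram energy deposits into concentric spherical shells and divide
            # by the shell mass to obtain the absorbed dose per shell (MeV/g).
            edep_per_shell, _ = np.histogram(radii_cm, bins=shell_edges_cm, weights=edep_mev)
            shell_vol = 4.0 / 3.0 * np.pi * np.diff(shell_edges_cm**3)   # cm^3
            return edep_per_shell / (shell_vol * density_g_cm3)

        # Hypothetical deposits from a point source of electrons in water (illustrative).
        rng = np.random.default_rng(1)
        radii = rng.exponential(scale=5e-3, size=10_000)   # cm
        edep = np.full(radii.size, 0.01)                   # MeV per deposit
        edges = np.linspace(0.0, 0.02, 41)
        dpk = dose_point_kernel(radii, edep, edges)
        print(dpk[:5])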

  15. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g.Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  16. SU-F-J-149: Beam and Cryostat Scatter Characteristics of the Elekta MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duglio, M; Towe, S; Roberts, D

    2016-06-15

    Purpose: The Elekta MR-Linac combines a digital linear accelerator system with a 1.5T Philips MRI machine. This study aimed to determine key characteristic information regarding the MR-Linac beam and in particular it evaluated the effect of the MR cryostat on the out-of-field scatter dose. Methods: Tissue phantom ratios, profiles and depth doses were acquired in plastic water with an IC-profiler or with an MR compatible water tank using multiple system configurations (Full (B0=1.5T), Full (B0=0T) and No cryostat). Additionally, an in-house CAD based Monte Carlo code based on Penelope was used to provide comparative data. Results: With the cryostat in place and B0=0T, the measured TPR for the MR-Linac system was 0.702, indicating an energy of around 7MV. Without the cryostat, the measured TPR was 0.669. For the Full (B0=0T) case, out-of-field dose at a depth of 10 cm in the isocentric plane, 5 cm from the field edge, was 0.8%, 3.1% and 5.4% for 3×3 cm{sup 2}, 10×10 cm{sup 2} and 20×20 cm{sup 2} fields respectively. The out-of-field dose (averaged between 5 cm and 10 cm beyond the field edges) in the "with cryostat" case is 0.78% (absolute difference) higher than without the cryostat for clinically relevant field sizes (i.e. 10×10 cm{sup 2}) and comparable to measured conventional 6MV treatment beams at a depth of 10 cm (within 0.1% between 5 cm and 6 cm from the field edge). At dose maximum and at 5 cm from the field edge, the "with cryostat" out-of-field scatter for a 10×10 cm{sup 2} field is 1.5% higher than "without cryostat", with a modest increase (0.9%) compared to Agility 6MV in the same conditions. Conclusion: The study has presented typical characteristics of the MR-Linac beam and determined that out-of-field dose is comparable to conventional treatment beams. All authors are employed by Elekta Ltd., who are developing an MR-Linac.

  17. Targeting mitochondria in cancer cells using gold nanoparticle-enhanced radiotherapy: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkby, Charles, E-mail: charles.kirkby@albertahealthservices.ca; Ghasroddashti, Esmaeel; Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4

    2015-02-15

    Purpose: Radiation damage to mitochondria has been shown to alter cellular processes and even lead to apoptosis. Gold nanoparticles (AuNPs) may be used to enhance these effects in scenarios where they collect on the outer membranes of mitochondria. A Monte Carlo (MC) approach is used to estimate mitochondrial dose enhancement under a variety of conditions. Methods: The PENELOPE MC code was used to generate dose distributions resulting from photons striking a 13 nm diameter AuNP with various thicknesses of water-equivalent coatings. Similar dose distributions were generated with the AuNP replaced by water so as to estimate the gain in dose on a microscopic scale due to the presence of AuNPs within an irradiated volume. Models of mitochondria with AuNPs affixed to their outer membrane were then generated—considering variation in mitochondrial size and shape, number of affixed AuNPs, and AuNP coating thickness—and exposed (in a dose calculation sense) to source spectra ranging from 6 MV to 90 kVp. Subsequently, dose enhancement ratios (DERs), i.e. the ratio of the dose with the AuNPs present to that with no AuNPs, were tallied for the entire mitochondrion and its components under these scenarios. Results: For a representative case of a 1000 nm diameter mitochondrion affixed with 565 AuNPs, each with a 13 nm thick coating, the mean DER over the whole organelle ranged from roughly 1.1 to 1.6 for the kilovoltage sources, but was generally less than 1.01 for the megavoltage sources. The outer membrane DERs remained less than 1.01 for the megavoltage sources, but rose to 2.3 for 90 kVp. The voxel maximum DER values were as high as 8.2 for the 90 kVp source and increased further when the particles clustered together. The DER exhibited dependence on the mitochondrion dimensions, number of AuNPs, and the AuNP coating thickness. Conclusions: Substantial dose enhancement directly to the mitochondria can be achieved under the conditions modeled. If the mitochondrion dose can be directly enhanced, as these simulations show, this work suggests the potential for both a tool to study the role of mitochondria in cellular response to radiation and a novel avenue for radiation therapy in that the mitochondria may be targeted, rather than the nuclear DNA.

  18. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  19. SU-G-201-02: Application of RayStretch in Clinical Cases: A Calculation for Heterogeneity Corrections in LDR Permanent I-125 Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hueso-Gonzalez, F; Vijande, J; Ballester, F

    Purpose: Tissue heterogeneities and calcifications have a significant impact on the dosimetry of low energy brachytherapy (BT). RayStretch is an analytical algorithm developed in our institution to incorporate heterogeneity corrections in LDR prostate brachytherapy. The aim of this work is to study its application in clinical cases by comparing its predictions with the results obtained with TG-43 and Monte Carlo (MC) simulations. Methods: A clinical implant (71 I-125 seeds, 15 needles) from a real patient was considered. On this patient, different volumes with calcifications were considered. The dosimetry was evaluated in three ways: i) with the treatment planning system (TPS) (TG-43), ii) with an MC study using the Penelope2009 code, and iii) with RayStretch. To analyse the performance of RayStretch, calcifications located in the prostate lobules covering 11% of the total prostate volume, as well as larger calcifications located in the lobules and underneath the urethra occupying a total volume of 30%, were considered. Three mass densities (1.05, 1.20, and 1.35 g/cm3) were explored for the calcifications. Therefore, 6 different scenarios ranging from small low-density calcifications to large high-density ones have been discussed. Results: DVH and D90 results given by RayStretch agree within 1% with the full MC simulations. Although no effort has been made to improve RayStretch numerical performance, its present implementation is able to evaluate a clinical implant in a few seconds to the same level of accuracy as a detailed MC calculation. Conclusion: RayStretch is a robust method for heterogeneity corrections in prostate BT supported on TG-43 data. Its compatibility with commercial TPSs and its high calculation speed make it feasible for use in clinical settings for improving treatment quality. In a second phase of this project, it will also allow its use during intraoperative ultrasound planning. This study was partly supported by a fellowship grant from the Spanish Ministry of Education, by the Generalitat Valenciana under Project PROMETEOII/2013/010, by the Spanish Government under Project No. FIS2013-42156 and by the European Commission within the Seventh Framework Programme through ENTERVISION (grant agreement number 264552).
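
    The D90 metric quoted above is the dose covering 90% of the target volume; for equal-volume voxels it reduces to the 10th percentile of the voxel doses. A minimal sketch with hypothetical dose arrays (not the patient data of the study) is:

        import numpy as np

        def d90(voxel_doses):
            # Dose received by at least 90% of the (equal-volume) voxels.
            return np.percentile(voxel_doses, 10.0)

        rng = np.random.default_rng(2)
        d_tg43 = rng.normal(160.0, 15.0, 50_000)                 # Gy, illustrative TG-43 voxel doses
        d_mc = d_tg43 * rng.normal(0.97, 0.02, d_tg43.size)      # illustrative heterogeneity effect

        print(f"D90 (TG-43): {d90(d_tg43):6.1f} Gy")
        print(f"D90 (MC)   : {d90(d_mc):6.1f} Gy")
        print(f"relative difference: {100.0 * (d90(d_mc) / d90(d_tg43) - 1.0):+.1f} %")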

  20. Genetic Diversity of Avian Paramyxovirus Type 6 Isolated from Wild Ducks in the Republic of Korea.

    PubMed

    Choi, Kang-Seuk; Kim, Ji-Ye; Lee, Hyun-Jeong; Jang, Min-Jun; Kwon, Hyuk-Moo; Sung, Haan-Woo

    2018-03-08

    Eleven avian paramyxovirus type 6 (APMV-6) isolates from Eurasian Wigeon (n=5; Anas penelope), Mallards (n=2; Anas platyrhynchos), and unknown species of wild ducks (n=4) from Korea were analyzed based on the nucleotide (nt) and deduced amino acid (aa) sequences of the fusion (F) gene. Fecal samples were collected in 2010-2014. Genotypes were assigned based on phylogenetic analyses. Our results revealed that APMV-6 could be classified into at least two distinct genotypes, G1 and G2. The open reading frame (ORF) of the G1 genotype was 1,668 nt in length, and the putative F0 cleavage site sequence was 113PAPEPRL119 (residues 113-119). The G2 genotype viruses included five isolates from Eurasian Wigeons and four isolates from unknown waterfowl species, together with two reference APMV-6 strains from the Red-necked Stint (Calidris ruficollis) from Japan and an unknown duck from Italy. There was an N-truncated ORF (1,638 nt), due to an N-terminal truncation of 30 nt in the signal peptide region of the F gene, and the putative F0 cleavage site sequence was 103SIREPRL109 (residues 103-109). The genetic diversity and ecology of APMV-6 are discussed.

  1. MAGIC with formaldehyde applied to dosimetry of HDR brachytherapy source

    NASA Astrophysics Data System (ADS)

    Marques, T; Fernandes, J; Barbi, G; Nicolucci, P; Baffa, O

    2009-05-01

    The use of polymer gel dosimeters in brachytherapy can allow the determination of three-dimensional dose distributions in large volumes and with high spatial resolution if an adequate calibration process is performed. One of the major issues in these experiments is the dependence of the polymer gel response on dose rate when high dose rate sources are used and the doses in the vicinity of the sources are to be determined. In this study, the response of a modified MAGIC polymer gel with formaldehyde around an Iridium-192 HDR brachytherapy source is presented. Experimental results obtained with this polymer gel were compared with ionization chamber measurements and with Monte Carlo simulation with PENELOPE. A maximum difference of 3.10% was found between gel dose measurements and Monte Carlo simulation at a radial distance of 18 mm from the source. The results obtained show that the gel's response is strongly influenced by dose rate and that a different calibration should be used for the vicinity of the source and for regions of lower dose rates. The results obtained in this study show that, provided the proper calibration is performed, MAGIC with formaldehyde can be successfully used to accurately determine dose distributions from high dose rate brachytherapy sources.

  2. Formalizing structured file services for the data storage and retrieval subsystem of the data management system for Spacestation Freedom

    NASA Technical Reports Server (NTRS)

    Jamsek, Damir A.

    1993-01-01

    A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for a formal methods treatment by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Spacestation Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English version Software Requirements Specification (SRS) is reproduced in Appendix A.

  3. Fiber-optic components for optical communications and sensing

    NASA Astrophysics Data System (ADS)

    Marques, Carlos Alberto Ferreira

    In recent years, optoelectronics has become established as a research field capable of leading to new technological solutions. The abundant achievements in the field of optics and lasers, as well as in optical communications, have been of great importance and have triggered a series of innovations. Among the large number of existing optical components, fiber-optic components are particularly relevant because of their simplicity and the high data-transport capacity of optical fiber. This work focused on one of these optical components: fiber diffraction gratings, which have unique optical processing properties. This class of optical components is extremely attractive for the development of optical communication devices and sensors. The work began with a theoretical analysis applied to fiber gratings, and the most widely used fiber-grating fabrication methods were reviewed. The inscription of fiber gratings was also addressed in this work: an automated inscription system was implemented for silica optical fiber, and the experimental results showed good agreement with the simulation study. A system for inscribing Bragg gratings in plastic optical fiber was also developed. A detailed study of acousto-optic modulation in silica and plastic fiber gratings was presented. Through a detailed analysis of the mechanical excitation modes applied to the acousto-optic modulator, it was highlighted that two predominant modes of acoustic excitation can be established in the optical fiber, depending on the applied acoustic frequency. Through this characterization, it was possible to develop new applications for optical communications. Studies and implementations of different fiber-grating-based devices were carried out, using the acousto-optic effect and the fiber regeneration process, for various applications such as a fast optical add-drop multiplexer, tunable group delay of Bragg gratings, tunable phase-shifted Bragg gratings, a method for inscribing Bragg gratings with complex profiles, a tunable filter for gain equalization, and adjustable optical notch filters.

  4. A simple modification of TG-43 based brachytherapy dosimetry with improved fitting functions: application to the selectSeed source.

    PubMed

    Juan-Senabre, Xavier J; Porras, Ignacio; Lallena, Antonio M

    2013-06-01

    A variation of the TG-43 protocol for seeds with cylindrical symmetry, aiming at a better description of the radial and anisotropy functions, is proposed. The TG-43 two-dimensional formalism is modified by introducing a new anisotropy function. New fitting functions that permit a more robust description of the radial and anisotropy functions than the usual polynomials are also studied. The relationship between the new anisotropy function and the anisotropy factor included in the one-dimensional TG-43 formalism is analyzed. The new formalism is tested for the (125)I Nucletron selectSeed brachytherapy source, using Monte Carlo simulations performed with PENELOPE. The goodness of the new parameterizations is discussed. The results obtained indicate that precise fits can be achieved, with a better description than that provided by previous parameterizations. Special care has been taken in the description and fitting of the anisotropy factor near the source. The modified formalism shows advantages with respect to the usual one in the description of the anisotropy functions. The new parameterizations obtained can be easily implemented in clinical planning calculation systems, provided that the ratio between geometry factors is also modified according to the new dose rate expression. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
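
    For context, the standard AAPM TG-43 two-dimensional dose-rate equation on which the modified formalism builds can be written as follows (standard notation; the proposed new anisotropy function presumably takes the place of F(r, θ)):

        \dot{D}(r,\theta) = S_K \,\Lambda\,
            \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
        \qquad r_0 = 1~\mathrm{cm},\quad \theta_0 = 90^{\circ}

    Here S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L the radial dose function and F the 2D anisotropy function.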

  5. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  6. GEMS X-ray Polarimeter Performance Simulations

    NASA Technical Reports Server (NTRS)

    Baumgartner, Wayne H.; Strohmayer, Tod; Kallman, Tim; Black, J. Kevin; Hill, Joanne; Swank, Jean

    2012-01-01

    The Gravity and Extreme Magnetism Small explorer (GEMS) is an X-ray polarization telescope selected as a NASA small explorer satellite mission. The X-ray Polarimeter on GEMS uses a Time Projection Chamber gas proportional counter to measure the polarization of astrophysical X-rays in the 2-10 keV band by sensing the direction of the track of the primary photoelectron excited by the incident X-ray. We have simulated the expected sensitivity of the polarimeter to polarized X-rays. We use the simulation package Penelope to model the physics of the interaction of the initial photoelectron with the detector gas and to determine the distribution of charge deposited in the detector volume. We then model the charge diffusion in the detector, and produce simulated track images. Within the track reconstruction algorithm we apply cuts on the track shape and focus on the initial photoelectron direction in order to maximize the overall sensitivity of the instrument. Using this technique, we have predicted instrument modulation factors ν100 for 100% polarized X-rays ranging from 10% to over 60% across the 2-10 keV X-ray band. We also discuss the simulation program used to develop and model some of the algorithms used for triggering and energy measurement of events in the polarimeter.
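
    The modulation factor mentioned above is typically obtained by fitting the histogram of reconstructed photoelectron emission angles to the standard form N(φ) = A + B cos²(φ − φ0), from which the modulation is B/(2A + B). The sketch below uses synthetic angles drawn by rejection sampling, not GEMS simulation output.

        import numpy as np
        from scipy.optimize import curve_fit

        def modulation_curve(phi, a, b, phi0):
            # N(phi) = A + B*cos^2(phi - phi0); modulation = B / (2A + B)
            return a + b * np.cos(phi - phi0) ** 2

        rng = np.random.default_rng(3)
        true_mu, n_events = 0.4, 200_000
        # Draw synthetic emission angles with the desired modulation by rejection sampling.
        phi = rng.uniform(-np.pi, np.pi, 4 * n_events)
        accept = rng.uniform(0, 1, phi.size) < (1 + true_mu * np.cos(2 * phi)) / (1 + true_mu)
        phi = phi[accept][:n_events]

        counts, edges = np.histogram(phi, bins=36)
        centres = 0.5 * (edges[:-1] + edges[1:])
        (a, b, phi0), _ = curve_fit(modulation_curve, centres, counts,
                                    p0=[counts.mean(), np.ptp(counts), 0.0])
        mu = b / (2.0 * a + b)
        print(f"fitted modulation factor: {mu:.3f} (true {true_mu})")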

  7. "How to stop choking to death": Rethinking lesbian separatism as a vibrant political theory and feminist practice.

    PubMed

    Enszer, Julie R

    2016-01-01

    In contemporary feminist discourses, lesbian separatism is often mocked. Whether blamed as a central reason for feminism's alleged failure or seen as an unrealistic, utopian vision, lesbian separatism is a maligned social and cultural formation. This article traces the intellectual roots of lesbian feminism from the early 1970s in The Furies and Radicalesbians through the work of Julia Penelope and Sarah Lucia Hoagland in the 1980s and 1990s, then considers four feminist and lesbian organizations that offer innovative engagements with lesbian separatism. Olivia Records operated as a separatist enterprise, producing and distributing womyn's music during the 1970s and 1980s. Two book distributors, Women in Distribution, which operated in the 1970s, and Diaspora Distribution, which operated in the 1980s, offer another approach to lesbian separatism as a form of economic and entrepreneurial engagement. Finally, Sinister Wisdom, a lesbian-feminist literary and arts journal, enacts a number of different forms of lesbian separatism during its forty-year history. These four examples demonstrate economic and cultural investments of lesbian separatism and situate its investments in larger visionary feminist projects. More than a rigid ideology, lesbian separatism operates as a feminist process, a method for living in the world.

  8. A generic high-dose rate {sup 192}Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es; Carlsson Tedgren, Åsa; Granero, Domingo

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using an MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) 192Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR 192Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic 192Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR 192Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 ± 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within Type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR 192Ir source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.
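
    The consensus dose-rate constant quoted above is the unweighted mean of several independent Monte Carlo estimates, with the spread expressed as a Type A (statistical) standard uncertainty of the mean. A minimal sketch of that combination step (the seven example values are hypothetical placeholders, not the published per-code results):

```python
import numpy as np

# Hypothetical dose-rate constants from independent MC codes, in cGy/(h U).
lambda_values = np.array([1.1105, 1.1112, 1.1108, 1.1110, 1.1107, 1.1113, 1.1109])

mean = lambda_values.mean()
# Type A standard uncertainty of the mean (k = 1): sample std / sqrt(n)
u_type_a = lambda_values.std(ddof=1) / np.sqrt(lambda_values.size)

print(f"Dose-rate constant = {mean:.4f} +/- {u_type_a:.4f} cGy/(h U) (k = 1)")
```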

  9. EDITORIAL: Special section: Selected papers from the Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) Special section: Selected papers from the Third European Workshop on Monte Carlo Treatment Planning (MCTP2012)

    NASA Astrophysics Data System (ADS)

    Spezi, Emiliano; Leal, Antonio

    2013-04-01

    The Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) was held from 15 to 18 May 2012 in Seville, Spain. The event was organized by the Universidad de Sevilla with the support of the European Workgroup on Monte Carlo Treatment Planning (EWG-MCTP). MCTP2012 followed two successful meetings, one held in Ghent (Belgium) in 2006 (Reynaert 2007) and one in Cardiff (UK) in 2009 (Spezi 2010). The recurrence of these workshops, together with successful events held in parallel by McGill University in Montreal (Seuntjens et al 2012), shows consolidated interest from the scientific community in Monte Carlo (MC) treatment planning. The workshop was attended by a total of 90 participants, mainly coming from a medical physics background. A total of 48 oral presentations and 15 posters were delivered in specific scientific sessions including dosimetry, code development, imaging, modelling of photon and electron radiation transport, external beam radiation therapy, nuclear medicine, brachytherapy and hadrontherapy. A copy of the programme is available on the workshop's website (www.mctp2012.com). In this special section of Physics in Medicine and Biology we report six papers that were selected following the journal's rigorous peer review procedure. These papers provide a good cross-section of the areas of application of MC in treatment planning that were discussed at MCTP2012. Czarnecki and Zink (2013) and Wagner et al (2013) present the results of their work in small field dosimetry. Czarnecki and Zink (2013) studied field-size- and detector-dependent correction factors for diodes and ion chambers within a clinical 6 MV photon beam generated by a Siemens linear accelerator. Their modelling work, based on the BEAMnrc/EGSnrc codes and experimental measurements, revealed that unshielded diodes were the best choice for small field dosimetry because of their independence from the electron beam spot size and a correction factor close to unity. Wagner et al (2013) investigated the recombination effect in liquid ionization chambers for stereotactic radiotherapy, a field of increasing importance in external beam radiotherapy. They modelled both the radiation source (a Cyberknife unit) and the detector with the BEAMnrc/EGSnrc codes and quantified the dependence of the response of this type of detector on factors such as the volume effect and the electrode. They also recommended that these dependences be accounted for in measurements involving small fields. In the field of external beam radiotherapy, Chakarova et al (2013) showed how total body irradiation (TBI) could be improved by simulating patient treatments with MC. In particular, BEAMnrc/EGSnrc-based simulations highlighted the importance of optimizing individual compensators for TBI treatments. In the same area of application, Mairani et al (2013) reported on a new tool for treatment planning in proton therapy based on the FLUKA MC code. The software, used to model both the proton therapy beam and the patient anatomy, supports single-field and multiple-field optimization and can be used to optimize physical and relative biological effectiveness (RBE)-weighted dose distributions, using both constant and variable RBE models. In the field of nuclear medicine, Marcatili et al (2013) presented RAYDOSE, a Geant4-based code specifically developed for applications in molecular radiotherapy (MRT). RAYDOSE has been designed to work in MRT trials using sequential positron emission tomography (PET) or single-photon emission tomography (SPECT) imaging to model patient-specific time-dependent metabolic uptake and to calculate the total 3D dose distribution. The code was validated through experimental measurements in homogeneous and heterogeneous phantoms. Finally, in the field of code development, Miras et al (2013) reported on CloudMC, a Windows Azure-based application for the parallelization of MC calculations in a dynamic cluster environment. Although the performance of CloudMC has been tested with the PENELOPE MC code, the authors report that the software has been designed to be independent of the type of MC code, provided that the simulation meets a number of operational criteria. We wish to thank Elekta/CMS Inc., the University of Seville, the Junta of Andalusia and the European Regional Development Fund for their financial support. We would also like to acknowledge the members of EWG-MCTP for their help in peer-reviewing all the abstracts, and all the invited speakers who kindly agreed to deliver keynote presentations in their area of expertise. A final word of thanks to our colleagues who worked on the reviewing process of the papers selected for this special section and to the IOP Publishing staff who made it possible. MCTP2012 was accredited by the European Federation of Organisations for Medical Physics as a CPD event for medical physicists. Emiliano Spezi and Antonio Leal, Guest Editors. References: Chakarova R, Müntzing K, Krantz M, Hedin E and Hertzman S 2013 Monte Carlo optimization of total body irradiation in a phantom and patient geometry Phys. Med. Biol. 58 2461-69; Czarnecki D and Zink K 2013 Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields Phys. Med. Biol. 58 2431-44; Mairani A, Böhlen T T, Schiavi A, Tessonnier T, Molinelli S, Brons S, Battistoni G, Parodi K and Patera V 2013 A Monte Carlo-based treatment planning tool for proton therapy Phys. Med. Biol. 58 2471-90; Marcatili S, Pettinato C, Daniels S, Lewis G, Edwards P, Fanti S and Spezi E 2013 Development and validation of RAYDOSE: a Geant4 based application for molecular radiotherapy Phys. Med. Biol. 58 2491-508; Miras H, Jiménez R, Miras C and Gomà C 2013 CloudMC: A cloud computing application for Monte Carlo simulation Phys. Med. Biol. 58 N125-33; Reynaert N 2007 First European Workshop on Monte Carlo Treatment Planning J. Phys.: Conf. Ser. 74 011001; Seuntjens J, Beaulieu L, El Naqa I and Després P 2012 Special section: Selected papers from the Fourth International Workshop on Recent Advances in Monte Carlo Techniques for Radiation Therapy Phys. Med. Biol. 57 (11) E01; Spezi E 2010 Special section: Selected papers from the Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) Phys. Med. Biol. 55 (16) E01; Wagner A, Crop F, Lacornerie T, Vandevelde F and Reynaert N 2013 Use of a liquid ionization chamber for stereotactic radiotherapy dosimetry Phys. Med. Biol. 58 2445-59.

  10. Experimental Evidence Shows the Importance of Behavioural Plasticity and Body Size under Competition in Waterfowl

    PubMed Central

    Zhang, Yong; Prins, Herbert H. T.; Versluijs, Martijn; Wessels, Rick; Cao, Lei; de Boer, Willem Frederik

    2016-01-01

    When differently sized species feed on the same resources, interference competition may occur, which may negatively affect their food intake rate. It is expected that competition between species also alters behaviour and feeding patch selection. To assess these changes in behaviour and patch selection, we applied an experimental approach using captive birds of three differently sized Anatidae species: wigeon (Anas penelope) (~600 g), swan goose (Anser cygnoides) (~2700 g) and bean goose (Anser fabalis) (~3200 g). We quantified the functional response for each species and then recorded their behaviour and patch selection with and without potential competitors, using different species combinations. Our results showed that all three species acquired the highest nitrogen intake at relatively tall swards (6 and 9 cm) when foraging in single-species flocks in the functional response experiment. Goose species were offered foraging patches differing in sward height with and without competitors, and we tested for the effect of competition on foraging behaviour. The mean percentage of time spent feeding and being vigilant did not change under competition for any of the species. However, all species utilized strategies that increased their peck rate on patches across different sward heights, resulting in the same instantaneous and nitrogen intake rates. Our results suggest that variation in peck rate over different sward heights permits Anatidae herbivores to compensate for the loss of intake under competition, illustrating the importance of behavioural plasticity in heterogeneous environments when competing with other species for resources. PMID:27727315

  11. An atlas-based organ dose estimator for tomosynthesis and radiography

    NASA Astrophysics Data System (ADS)

    Hoye, Jocelyn; Zhang, Yakun; Agasthya, Greeshma; Sturgeon, Greg; Kapadia, Anuj; Segars, W. Paul; Samei, Ehsan

    2017-03-01

    The purpose of this study was to provide patient-specific organ dose estimation based on an atlas of human models for twenty tomosynthesis and radiography protocols. The study utilized a library of 54 adult computational phantoms (age: 18-78 years, weight 52-117 kg) and a validated Monte Carlo simulation (PENELOPE) of a tomosynthesis and radiography system to estimate organ dose. Positioning of patient anatomy was based on radiographic positioning handbooks. The field of view for each exam was calculated to include relevant organs per protocol. Through simulations, the energy deposited in each organ was binned to estimate normalized organ doses, which were compiled into a reference database. The database can be used as the basis to devise a dose calculator to predict patient-specific organ dose values based on kVp, mAs, exposure in air, and patient habitus for a given protocol. As an example of the utility of this tool, dose to an organ was studied as a function of average patient thickness in the field of view for a given exam and as a function of Body Mass Index (BMI). For tomosynthesis, organ doses can also be studied as a function of x-ray tube position. This work developed comprehensive information for organ dose dependencies across tomosynthesis and radiography. In general, organ dose decreased exponentially with increasing patient size, with a dependence that was highly protocol specific. There was a wide range of variability in organ dose across the patient population, which needs to be incorporated into the metrology of organ dose.
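
    The exponential decrease of organ dose with patient thickness noted above can be captured by fitting a simple model of the form D(t) = D0·exp(−k·t) to the simulated doses. A minimal sketch of such a fit (the thickness and dose arrays are hypothetical placeholders, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_model(thickness_cm, d0, k):
    """Normalized organ dose as an exponential function of patient thickness."""
    return d0 * np.exp(-k * thickness_cm)

# Hypothetical simulated data points (thickness in cm, normalized organ dose).
thickness = np.array([16.0, 18.0, 20.0, 22.0, 24.0, 26.0, 28.0])
organ_dose = np.array([0.52, 0.43, 0.36, 0.30, 0.25, 0.21, 0.18])

params, cov = curve_fit(dose_model, thickness, organ_dose, p0=(1.0, 0.1))
d0_fit, k_fit = params
print(f"D0 = {d0_fit:.3f}, k = {k_fit:.3f} per cm")
```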

  12. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y.

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two different media. Optimization of the simulation parameters and the use of VR techniques saved a significant amount of computation time. Finally, parallelization of the simulations improved even further the calculation time, which reached 1 day for a typical irradiation case envisaged in the forthcoming clinical trials in MRT. An example of MRT treatment in a dog's head is presented, showing the performance of the calculation engine. Conclusions: The development of the first MC-based calculation engine for the future TPS devoted to MRT has been accomplished. This will constitute an essential tool for the future clinical trials on pets at the ESRF. The MC engine is able to calculate dose distributions in micrometer-sized bins in complex voxelized CT structures in a reasonable amount of time. Minimization of the computation time by using several approaches has led to timings that are adequate for pet radiotherapy at synchrotron facilities. The next step will consist in its integration into a user-friendly graphical front-end.
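
    The decoupling described above means that dose is scored on a grid with micrometer-sized bins transverse to the microbeams, while the material and density of each bin are looked up from the much coarser CT voxel grid. A minimal sketch of that index mapping, assuming regular grids that share an origin (the grid spacings and lookup array are hypothetical, not the penelope/penEasy implementation):

```python
import numpy as np

def ct_voxel_index(position_mm, ct_voxel_size_mm):
    """Map a physical position (x, y, z) in mm to the index of the enclosing
    CT voxel, assuming both grids start at the same origin."""
    return tuple(int(np.floor(p / s)) for p, s in zip(position_mm, ct_voxel_size_mm))

# Hypothetical grids: 25-micrometer dose bins transverse to the microbeams,
# 2 mm CT voxels used only for material/density lookup.
dose_bin_size_mm = (0.025, 2.0, 2.0)
ct_voxel_size_mm = (2.0, 2.0, 2.0)
ct_density = np.ones((128, 128, 64))  # placeholder density map derived from the CT

# Density seen by the dose bin with indices (i, j, k):
i, j, k = 4000, 10, 20
bin_center_mm = tuple((n + 0.5) * s for n, s in zip((i, j, k), dose_bin_size_mm))
ix, iy, iz = ct_voxel_index(bin_center_mm, ct_voxel_size_mm)
print(ct_density[ix, iy, iz])
```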

  13. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.

    PubMed

    Martinez-Rovira, I; Sempau, J; Prezado, Y

    2012-05-01

    Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two different media. Optimization of the simulation parameters and the use of VR techniques saved a significant amount of computation time. Finally, parallelization of the simulations improved even further the calculation time, which reached 1 day for a typical irradiation case envisaged in the forthcoming clinical trials in MRT. An example of MRT treatment in a dog's head is presented, showing the performance of the calculation engine. The development of the first MC-based calculation engine for the future TPS devoted to MRT has been accomplished. This will constitute an essential tool for the future clinical trials on pets at the ESRF. The MC engine is able to calculate dose distributions in micrometer-sized bins in complex voxelized CT structures in a reasonable amount of time. Minimization of the computation time by using several approaches has led to timings that are adequate for pet radiotherapy at synchrotron facilities. The next step will consist in its integration into a user-friendly graphical front-end.

  14. Monte Carlo study for designing a dedicated “D”-shaped collimator used in the external beam radiotherapy of retinoblastoma patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayorga, P. A.; Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada; Brualla, L.

    2014-01-15

    Purpose: Retinoblastoma is the most common intraocular malignancy in early childhood. Patients treated with external beam radiotherapy respond very well to the treatment. However, owing to the genotype of children suffering from hereditary retinoblastoma, the risk of secondary radio-induced malignancies is high. The University Hospital of Essen has successfully treated these patients on a daily basis for nearly 30 years using a dedicated “D”-shaped collimator. The use of this collimator, which delivers a highly conformal small radiation field, gives very good results in the control of the primary tumor as well as in preserving visual function, while it avoids the devastating side effects of deformation of midface bones. The purpose of the present paper is to propose a modified version of the “D”-shaped collimator that further reduces the irradiation field, with the aim of also reducing the risk of radio-induced secondary malignancies. Concurrently, the new dedicated “D”-shaped collimator must be easier to build and at the same time produce dose distributions that differ only in field size from the dose distributions obtained with the collimator currently in use. The aim of the former requirement is to facilitate the adoption of the authors' irradiation technique both at the authors' and at other hospitals; the fulfillment of the latter allows the authors to continue drawing on the clinical experience gained over more than 30 years. Methods: The Monte Carlo code PENELOPE was used to study the effect that the different structural elements of the dedicated “D”-shaped collimator have on the absorbed dose distribution. To perform this study, the radiation transport through a Varian Clinac 2100 C/D operating at 6 MV was simulated in order to tally phase-space files, which were then used as radiation sources to simulate the considered collimators and the subsequent dose distributions. With the knowledge gained in that study, a new, simpler “D”-shaped collimator is proposed. Results: The proposed collimator delivers a dose distribution which is 2.4 cm wide along the inferior-superior direction of the eyeball. This width is 0.3 cm narrower than that of the dose distribution obtained with the collimator currently in clinical use. The other relevant characteristics of the dose distribution obtained with the new collimator, namely depth doses at clinically relevant positions, penumbra widths, and the shape of the lateral profiles, are statistically compatible with the results obtained for the collimator currently in use. Conclusions: The smaller field size delivered by the proposed collimator still fully covers the planning target volume with at least 95% of the maximum dose at a depth of 2 cm and provides a safety margin of 0.2 cm, thus ensuring an adequate treatment while reducing the irradiated volume.

  15. Monte Carlo simulation of TrueBeam flattening-filter-free beams using varian phase-space files: comparison with experimental data.

    PubMed

    Belosi, Maria F; Rodriguez, Miguel; Fogliata, Antonella; Cozzi, Luca; Sempau, Josep; Clivio, Alessandro; Nicolini, Giorgia; Vanetti, Eugenio; Krauss, Harald; Khamphan, Catherine; Fenoglietto, Pascal; Puxeu, Josep; Fedele, David; Mancosu, Pietro; Brualla, Lorenzo

    2014-05-01

    Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams, against experimental measurements from ten TrueBeam Linacs. The phase-space files have been used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes 3 × 3, 6 × 6, and 10 × 10 cm² using 1 × 1 × 1 mm³ voxels and for 20 × 20 and 40 × 40 cm² with 2 × 2 × 2 mm³ voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream of this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose-difference and distance-to-agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly due to an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm², while the discrepancy increased toward 2% in the 40 × 40 cm² cases. Gamma analysis resulted in an agreement of 100% when a 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm² field (∼95% agreement). With the more stringent criteria of 1%, 1 mm, the agreement reduced to almost 95% for field sizes up to 10 × 10 cm², worse for larger fields. Unflatness and slope FFF-specific parameters are in line with the possible energy underestimation of the simulated results relative to experimental data. The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm², which is the range of field sizes most commonly used in combination with the FFF, high-dose-rate beams.
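
    The gamma analysis quoted above compares each measured point against the simulated dose distribution using combined dose-difference and distance-to-agreement criteria (e.g. 2%, 2 mm). A minimal 1D sketch of the gamma index computation (a global-normalization variant written for illustration; the profile arrays are assumptions, not the PRIMO or measurement data):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol_mm=2.0):
    """Global 1D gamma index of an evaluated dose curve against a reference.

    dose_tol is a fraction of the reference maximum (e.g. 0.02 for 2%),
    dist_tol_mm is the distance-to-agreement criterion in mm.
    Returns one gamma value per reference point; gamma <= 1 means 'pass'."""
    d_norm = dose_tol * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        capital_gamma = np.sqrt(((x_eval - xr) / dist_tol_mm) ** 2
                                + ((d_eval - dr) / d_norm) ** 2)
        gammas[i] = capital_gamma.min()
    return gammas

# Hypothetical usage with a measured and a simulated lateral profile:
x = np.linspace(-50, 50, 201)                        # positions in mm
measured = np.exp(-(x / 30.0) ** 8)                  # toy unflattened-beam-like profile
simulated = 1.01 * np.exp(-((x - 0.3) / 30.0) ** 8)  # slightly shifted and scaled
g = gamma_index_1d(x, measured, x, simulated)
print(f"pass rate = {100.0 * np.mean(g <= 1.0):.1f}%")
```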

  16. Correction factors for ionization chamber measurements with the ‘Valencia’ and ‘large field Valencia’ brachytherapy applicators

    NASA Astrophysics Data System (ADS)

    Gimenez-Alventosa, V.; Gimenez, V.; Ballester, F.; Vijande, J.; Andreo, P.

    2018-06-01

    Treatment of small skin lesions using HDR brachytherapy applicators is a widely used technique. The shielded applicators currently available in clinical practice are based on a tungsten-alloy cup that collimates the source-emitted radiation into a small region, hence protecting nearby tissues. The goal of this manuscript is to evaluate the correction factors required for dose measurements with a plane-parallel ionization chamber typically used in clinical brachytherapy for the ‘Valencia’ and ‘large field Valencia’ shielded applicators. Monte Carlo simulations have been performed using the PENELOPE-2014 system to determine the absorbed dose deposited in a water phantom and in the chamber active volume with a Type A uncertainty of the order of 0.1%. The average energies of the photon spectra arriving at the surface of the water phantom differ by approximately 10%, being 384 keV for the ‘Valencia’ and 343 keV for the ‘large field Valencia’. The ionization chamber correction factors have been obtained for both applicators using three methods, their values depending on the applicator being considered. Using a depth-independent global chamber perturbation correction factor and no shift of the effective point of measurement yields depth-dose differences of up to 1% for the ‘Valencia’ applicator. Calculations using a depth-dependent global perturbation factor, or a shift of the effective point of measurement combined with a constant partial perturbation factor, result in differences of about 0.1% for both applicators. The results emphasize the relevance of carrying out detailed Monte Carlo studies for each shielded brachytherapy applicator and ionization chamber.
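
    The chamber correction factors discussed above are essentially ratios of the Monte Carlo-scored dose to water at the measurement point to the dose averaged over the chamber cavity, applied either as one depth-independent global factor or as a depth-dependent one. A minimal sketch of deriving a depth-dependent global correction factor from two MC depth-dose series (the dose arrays and the normalization depth are hypothetical placeholders, not the PENELOPE-2014 results):

```python
import numpy as np

# Hypothetical MC-scored dose per history versus depth (cm) in the phantom:
depth_cm = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0])
dose_water = np.array([1.000, 0.930, 0.840, 0.700, 0.590, 0.500, 0.360])   # dose to a small water voxel
dose_cavity = np.array([0.985, 0.917, 0.829, 0.692, 0.584, 0.496, 0.358])  # dose to the chamber cavity

# Depth-dependent global correction factor: ratio of dose to water to chamber dose,
# here also normalized to unity at a hypothetical reference depth of 5 mm.
p_global = dose_water / dose_cavity
ref_index = np.argmin(np.abs(depth_cm - 0.5))
p_global_normalized = p_global / p_global[ref_index]

for z, p in zip(depth_cm, p_global_normalized):
    print(f"z = {z:.1f} cm  ->  correction = {p:.4f}")
```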

  17. The sustainability of subsistence hunting by Matsigenka native communities in Manu National Park, Peru.

    PubMed

    Ohl-Schacherer, Julia; Shepard, Glenn H; Kaplan, Hillard; Peres, Carlos A; Levi, Taal; Yu, Douglas W

    2007-10-01

    The presence of indigenous people in tropical parks has fueled a debate over whether people in parks are conservation allies or direct threats to biodiversity. A well-known example is the Matsigenka (or Machiguenga) population residing in Manu National Park in Peruvian Amazonia. Because the exploitation of wild meat (or bushmeat), especially large vertebrates, represents the most significant internal threat to biodiversity in Manu, we analyzed 1 year of participatory monitoring of game offtake in two Matsigenka native communities within Manu Park (102,397 consumer days and 2,089 prey items). We used the Robinson and Redford (1991) index to identify five prey species hunted at or above maximum sustainable yield within the approximately 150-km² core hunting zones of the two communities: woolly monkey (Lagothrix lagotricha), spider monkey (Ateles chamek), white-lipped peccary (Tayassu pecari), Razor-billed Curassow (Mitu tuberosa), and Spix's Guan (Penelope jacquacu). There was little or no evidence that any of these five species has become depleted, other than locally, despite a near doubling of the human population since 1988. Hunter-prey profiles have not changed since 1988, and there has been little change in per capita consumption rates or mean prey weights. The current offtake by the Matsigenka appears to be sustainable, apparently due to source-sink dynamics. Source-sink dynamics imply that even with continued human population growth within a settlement, offtake for each hunted species will eventually reach an asymptote. Thus, stabilizing the Matsigenka population around existing settlements should be a primary policy goal for Manu Park.

  18. Technical Note: A simulation study on the feasibility of radiotherapy dose enhancement with calcium tungstate and hafnium oxide nano- and microparticles.

    PubMed

    Sherck, Nicholas J; Won, You-Yeon

    2017-12-01

    To assess the radiotherapy dose enhancement (RDE) potential of calcium tungstate (CaWO4) and hafnium oxide (HfO2) nano- and microparticles (NPs). A Monte Carlo simulation study was conducted to gauge their respective RDE potentials relative to that of the broadly studied gold (Au) NP. The study was warranted due to the promising clinical and preclinical studies involving both CaWO4 and HfO2 NPs as RDE agents in the treatment of various types of cancers. The study provides a baseline RDE to which future experimental RDE trends can be compared. All three materials were investigated in silico with the software Penetration and Energy Loss of Positrons and Electrons (PENELOPE 2014) developed by Francesc Salvat and distributed in the United States by the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory. The work utilizes the extensively studied Au NP as the "gold standard" for a baseline. The key metric used in the evaluation of the materials was the local dose enhancement factor (DEFloc). An additional metric used, termed the relative enhancement ratio (RER), evaluates material performance at the same mass concentrations. The results of the study indicate that Au has the strongest RDE potential using the DEFloc metric. HfO2 and CaWO4 both underperformed relative to Au, with DEFloc values lower by 2-3× and 4-100×, respectively. The computational investigation predicts the RDE performance ranking to be: Au > HfO2 > CaWO4. © 2017 American Association of Physicists in Medicine.
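
    Both figures of merit used above are ratios of scored doses: the local dose enhancement factor compares dose near the particle with and without it present, and the relative enhancement ratio compares a candidate material against the gold baseline at matched mass concentration. A minimal sketch of those two ratios (one plausible reading of the metrics; the dose values are hypothetical placeholders, not the published results):

```python
def dose_enhancement_factor(dose_with_np, dose_without_np):
    """Local dose enhancement factor: dose scored near the nanoparticle with
    the particle present divided by the dose without it."""
    return dose_with_np / dose_without_np

def relative_enhancement_ratio(def_material, def_gold):
    """Relative enhancement ratio: performance of a candidate material
    against the Au baseline at the same mass concentration."""
    return def_material / def_gold

# Hypothetical scored doses (arbitrary units) at the same mass concentration:
def_au = dose_enhancement_factor(6.0, 1.0)
def_hfo2 = dose_enhancement_factor(2.5, 1.0)
def_cawo4 = dose_enhancement_factor(1.4, 1.0)

print("RER(HfO2)  =", relative_enhancement_ratio(def_hfo2, def_au))
print("RER(CaWO4) =", relative_enhancement_ratio(def_cawo4, def_au))
```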

  19. Screening of repetitive motifs inside the genome of the flat oyster (Ostrea edulis): Transposable elements and short tandem repeats.

    PubMed

    Vera, Manuel; Bello, Xabier; Álvarez-Dios, Jose-Antonio; Pardo, Belen G; Sánchez, Laura; Carlsson, Jens; Carlsson, Jeanette E L; Bartolomé, Carolina; Maside, Xulio; Martinez, Paulino

    2015-12-01

    The flat oyster (Ostrea edulis) is one of the most appreciated molluscs in Europe, but its production has been greatly reduced by the parasite Bonamia ostreae. Here, new generation genomic resources were used to analyse the repetitive fraction of the oyster genome, with the aim of developing molecular markers to face this main oyster production challenge. The resulting oyster database consists of two sets of 10,318 and 7159 unique contigs (4.8 Mbp and 6.8 Mbp in total length) representing the oyster's genome (WG) and haemocyte transcriptome (HT), respectively. A total of 1083 sequences were identified as TE-derived, which corresponded to 4.0% of WG and 1.1% of HT. They were clustered into 142 homology groups, most of which were assigned to the Penelope order of retrotransposons, and to the Helitron and TIR DNA-transposons. Simple repeats and rRNA pseudogenes also made a significant contribution to the oyster's genome (0.5% and 0.3% of WG and HT, respectively). The most frequent short tandem repeats identified in WG were tetranucleotide motifs, while trinucleotide motifs were the most frequent in HT. Forty identified microsatellite loci, 20 from each database, were selected for technical validation. Success was much lower among WG than HT microsatellites (15% vs 55%), which could reflect higher variation in anonymous regions interfering with primer annealing. All of the developed microsatellites conformed to Hardy-Weinberg proportions and represent a useful tool to support future breeding programmes and to manage genetic resources of natural flat oyster beds. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Lack of virological and serological evidence for continued circulation of highly pathogenic avian influenza H5N8 virus in wild birds in the Netherlands, 14 November 2014 to 31 January 2016

    PubMed Central

    Poen, Marjolein J; Verhagen, Josanne H; Manvell, Ruth J; Brown, Ian; Bestebroer, Theo M; van der Vliet, Stefan; Vuong, Oanh; Scheuer, Rachel D; van der Jeugd, Henk P; Nolet, Bart A; Kleyheeg, Erik; Müskens, Gerhard J D M; Majoor, Frank A; Grund, Christian; Fouchier, Ron A M

    2016-01-01

    In 2014, H5N8 clade 2.3.4.4 highly pathogenic avian influenza (HPAI) viruses of the A/Goose/Guangdong/1/1996 lineage emerged in poultry and wild birds in Asia, Europe and North America. Here, wild birds were extensively investigated in the Netherlands for HPAI H5N8 virus (real-time polymerase chain reaction targeting the matrix and H5 gene) and antibody detection (haemagglutination inhibition and virus neutralisation assays) before, during and after the first virus detection in Europe in late 2014. Between 21 February 2015 and 31 January 2016, 7,337 bird samples were tested for the virus. One HPAI H5N8 virus-infected Eurasian wigeon (Anas penelope) sampled on 25 February 2015 was detected. Serological assays were performed on 1,443 samples, including 149 collected between 2007 and 2013, 945 between 14 November 2014 and 13 May 2015, and 349 between 1 September and 31 December 2015. Antibodies specific for HPAI H5 clade 2.3.4.4 were absent in wild bird sera obtained before 2014 and present in sera collected during and after the HPAI H5N8 emergence in Europe, with antibody incidence declining after the 2014/15 winter. Our results indicate that the HPAI H5N8 virus has not continued to circulate extensively in wild bird populations since the 2014/15 winter and that independent maintenance of the virus in these populations appears unlikely. PMID:27684783

  1. Feasibility of dose enhancement assessment: Preliminary results by means of Gd-infused polymer gel dosimeter and Monte Carlo study.

    PubMed

    Santibáñez, M; Guillen, Y; Chacón, D; Figueroa, R G; Valente, M

    2018-04-11

    This work reports the experimental development of an integral Gd-infused dosimeter suitable for Gd dose enhancement assessment, along with Monte Carlo simulations applied to determine the dose enhancement by radioactive and X-ray sources of interest in conventional and electronic brachytherapy. In this context, the first goal was to produce a stable and reliable Gd-infused dosimeter for direct and accurate measurement of the dose enhancement due to the presence of Gd. Dose-response was characterized for standard and Gd-infused PAGAT polymer gel dosimeters by means of optical transmission/absorbance. The developed Gd-infused PAGAT dosimeters proved to be stable, presenting a dose-response similar to that of standard PAGAT, with a linear trend up to 13 Gy and good post-irradiation readout stability verified at 24 and 48 h. Additionally, dose enhancement was evaluated for Gd-infused PAGAT dosimeters by means of Monte Carlo (PENELOPE) simulations considering scenarios for isotopic and X-ray generator sources. The obtained results demonstrated the feasibility of obtaining a maximum enhancement of around (14 ± 1)% for the 192Ir source and an average enhancement of (70 ± 13)% for 241Am. However, dose enhancement up to (267 ± 18)% may be achieved if suitable filtering is added to the 241Am source. On the other hand, optimized X-ray spectra may attain dose enhancements up to (253 ± 22)%, which constitutes a promising future alternative for replacing radioactive sources with electronic brachytherapy while achieving high dose levels. Copyright © 2018. Published by Elsevier Ltd.

  2. Dosimetric effects of polyethylene glycol surface coatings on gold nanoparticle radiosensitization

    NASA Astrophysics Data System (ADS)

    Koger, B.; Kirkby, C.

    2017-11-01

    One of the main appeals of using gold nanoparticles (GNPs) as radiosensitizers is that their surface coatings can be altered to manipulate their pharmacokinetic properties. However, Monte Carlo studies of GNP dosimetry tend to neglect these coatings, potentially changing the dosimetric results. This study quantifies the dosimetric effects of including a polyethylene glycol (PEG) surface coating on GNPs over both nanoscopic and microscopic ranges. Two dosimetric scales were explored using PENELOPE Monte Carlo simulations. In microscopic simulations, 500-1000 GNPs, with and without coatings, were placed in cavities of side lengths 0.8-4 µm, and the reduction of dose deposited to surrounding medium within these volumes due to the coating was quantified. Including PEG surface coatings of up to 20 nm thickness resulted in reductions of up to 7.5%, 4.0%, and 2.0% for GNP diameters of 10, 20, and 50 nm, respectively. Nanoscopic simulations observed the dose falloff in the first 500 nm surrounding a single GNP both with and without surface coatings of various thicknesses. Over the first 500 nm surrounding a single GNP, the presence of a PEG surface coating reduced dose by 5-26%, 8-28%, 8-30%, and 8-34% for 2, 10, 20, and 50 nm diameter GNPs, respectively, for various energies and coating thicknesses. Reductions in dose enhancement due to the inclusion of a GNP surface coating are non-negligible and should be taken into consideration when investigating GNP dose enhancement. Further studies should be carried out to investigate the biological effects of these coatings.

  3. Backscatter factors and mass energy-absorption coefficient ratios for diagnostic radiology dosimetry

    NASA Astrophysics Data System (ADS)

    Benmakhlouf, Hamza; Bouchard, Hugo; Fransson, Annette; Andreo, Pedro

    2011-11-01

    Backscatter factors, B, and mass energy-absorption coefficient ratios, (μen/ρ)w,air, for the determination of the surface dose in diagnostic radiology were calculated using Monte Carlo simulations. The main purpose was to extend the range of available data to qualities used in modern x-ray techniques, particularly for interventional radiology. A comprehensive database for mono-energetic photons between 4 and 150 keV and different field sizes was created for a 15 cm thick water phantom. Backscattered spectra were calculated with the PENELOPE Monte Carlo system, scoring track-length fluence differential in energy with negligible statistical uncertainty; using the Monte Carlo computed spectra, B factors and (μen/ρ)w,air were then calculated numerically for each energy. Weighted averaging procedures were subsequently used to convolve incident clinical spectra with mono-energetic data. The method was benchmarked against full Monte Carlo calculations of incident clinical spectra obtaining differences within 0.3-0.6%. The technique used enables the calculation of B and (μen/ρ)w,air for any incident spectrum without further time-consuming Monte Carlo simulations. The adequacy of the extended dosimetry data to a broader range of clinical qualities than those currently available, while keeping consistency with existing data, was confirmed through detailed comparisons. Mono-energetic and spectra-averaged values were compared with published data, including those in ICRU Report 74 and IAEA TRS-457, finding average differences of 0.6%. Results are provided in comprehensive tables appropriated for clinical use. Additional qualities can easily be calculated using a designed GUI interface in conjunction with software to generate incident photon spectra.
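
    The convolution step described above turns the mono-energetic database into values for an arbitrary clinical spectrum by weighted averaging, so no new Monte Carlo run is needed per beam quality. A minimal sketch, assuming the weights are taken proportional to the incident fluence spectrum times the mono-energetic air kerma per fluence (the spectra and coefficient arrays are hypothetical placeholders, not the published database):

```python
import numpy as np

def spectrum_weighted_value(fluence, kerma_per_fluence, mono_values):
    """Average a mono-energetic quantity (e.g. the backscatter factor B) over an
    incident photon spectrum, weighting by fluence times air kerma per fluence."""
    weights = fluence * kerma_per_fluence
    return np.sum(weights * mono_values) / np.sum(weights)

# Hypothetical mono-energetic data on a coarse grid of 20, 30, 40, 50, 60, 80, 100 keV:
b_mono = np.array([1.20, 1.32, 1.40, 1.45, 1.47, 1.46, 1.43])   # backscatter factors
k_per_phi = np.array([2.2, 1.1, 0.8, 0.7, 0.7, 0.8, 0.9])       # air kerma per fluence (arb. units)
phi = np.array([0.0, 0.4, 1.0, 0.9, 0.6, 0.2, 0.05])            # incident fluence spectrum

print(f"Spectrum-averaged B = {spectrum_weighted_value(phi, k_per_phi, b_mono):.3f}")
```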

  4. Monte Carlo investigation of backscatter factors for skin dose determination in interventional neuroradiology procedures

    NASA Astrophysics Data System (ADS)

    Omar, Artur; Benmakhlouf, Hamza; Marteinsdottir, Maria; Bujila, Robert; Nowik, Patrik; Andreo, Pedro

    2014-03-01

    Complex interventional and diagnostic x-ray angiographic (XA) procedures may yield patient skin doses exceeding the threshold for radiation-induced skin injuries. Skin dose is conventionally determined by converting the incident air kerma free-in-air into entrance surface air kerma, a process that requires the use of backscatter factors. Subsequently, the entrance surface air kerma is converted into skin kerma using mass energy-absorption coefficient ratios tissue-to-air, which for the photon energies used in XA is identical to the skin dose. The purpose of this work was to investigate how the cranial bone affects backscatter factors for the dosimetry of interventional neuroradiology procedures. The PENELOPE Monte Carlo system was used to calculate backscatter factors at the entrance surface of a spherical and a cubic water phantom that include a cranial bone layer. The simulations were performed for different clinical x-ray spectra, field sizes, and thicknesses of the bone layer. The results show a reduction of up to 15% when a cranial bone layer is included in the simulations, compared with conventional backscatter factors calculated for a homogeneous water phantom. The reduction increases for thicker bone layers, softer incident beam qualities, and larger field sizes, indicating that, due to the increased photoelectric cross-section of cranial bone compared to water, the bone layer acts primarily as an absorber of low-energy photons. For neurointerventional radiology procedures, backscatter factors calculated at the entrance surface of a water phantom containing a cranial bone layer increase the accuracy of the skin dose determination.

  5. Wild ducks excrete highly pathogenic avian influenza virus H5N8 (2014-2015) without clinical or pathological evidence of disease.

    PubMed

    van den Brand, Judith M A; Verhagen, Josanne H; Veldhuis Kroeze, Edwin J B; van de Bildt, Marco W G; Bodewes, Rogier; Herfst, Sander; Richard, Mathilde; Lexmond, Pascal; Bestebroer, Theo M; Fouchier, Ron A M; Kuiken, Thijs

    2018-04-18

    Highly pathogenic avian influenza (HPAI) is essentially a poultry disease. Wild birds have traditionally not been involved in its spread, but the epidemiology of HPAI has changed in recent years. After its emergence in southeastern Asia in 1996, H5 HPAI virus of the Goose/Guangdong lineage has evolved into several sub-lineages, some of which have spread over thousands of kilometers via long-distance migration of wild waterbirds. In order to determine whether the virus is adapting to wild waterbirds, we experimentally inoculated the HPAI H5N8 virus clade 2.3.4.4 group A from 2014 into four key waterbird species-Eurasian wigeon (Anas penelope), common teal (Anas crecca), mallard (Anas platyrhynchos), and common pochard (Aythya ferina)-and compared virus excretion and disease severity with historical data of the HPAI H5N1 virus infection from 2005 in the same four species. Our results showed that excretion was highest in Eurasian wigeons for the 2014 virus, whereas excretion was highest in common pochards and mallards for the 2005 virus. The 2014 virus infection was subclinical in all four waterbird species, while the 2005 virus caused clinical disease and pathological changes in over 50% of the common pochards. In chickens, the 2014 virus infection caused systemic disease and high mortality, similar to the 2005 virus. In conclusion, the evidence was strongest for Eurasian wigeons as long-distance vectors for HPAI H5N8 virus from 2014. The implications of the switch in species-specific virus excretion and decreased disease severity may be that the HPAI H5 virus more easily spreads in the wild-waterbird population.

  6. Giant Reverse Transcriptase-Encoding Transposable Elements at Telomeres.

    PubMed

    Arkhipova, Irina R; Yushenova, Irina A; Rodriguez, Fernando

    2017-09-01

    Transposable elements are omnipresent in eukaryotic genomes and have a profound impact on chromosome structure, function and evolution. Their structural and functional diversity is thought to be reasonably well-understood, especially in retroelements, which transpose via an RNA intermediate copied into cDNA by the element-encoded reverse transcriptase, and are characterized by a compact structure. Here, we report a novel type of expandable eukaryotic retroelements, which we call Terminons. These elements can attach to G-rich telomeric repeat overhangs at the chromosome ends, in a process apparently facilitated by complementary C-rich repeats at the 3'-end of the RNA template immediately adjacent to a hammerhead ribozyme motif. Terminon units, which can exceed 40 kb in length, display an unusually complex and diverse structure, and can form very long chains, with host genes often captured between units. As the principal polymerizing component, Terminons contain Athena reverse transcriptases previously described in bdelloid rotifers and belonging to the enigmatic group of Penelope-like elements, but can additionally accumulate multiple cooriented ORFs, including DEDDy 3'-exonucleases, GDSL esterases/lipases, GIY-YIG-like endonucleases, rolling-circle replication initiator (Rep) proteins, and putatively structural ORFs with coiled-coil motifs and transmembrane domains. The extraordinary length and complexity of Terminons and the high degree of interfamily variability in their ORF content challenge the current views on the structural organization of eukaryotic retroelements, and highlight their possible connections with the viral world and the implications for the elevated frequency of gene transfer. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Lack of virological and serological evidence for continued circulation of highly pathogenic avian influenza H5N8 virus in wild birds in the Netherlands, 14 November 2014 to 31 January 2016.

    PubMed

    Poen, Marjolein J; Verhagen, Josanne H; Manvell, Ruth J; Brown, Ian; Bestebroer, Theo M; van der Vliet, Stefan; Vuong, Oanh; Scheuer, Rachel D; van der Jeugd, Henk P; Nolet, Bart A; Kleyheeg, Erik; Müskens, Gerhard J D M; Majoor, Frank A; Grund, Christian; Fouchier, Ron A M

    2016-09-22

    In 2014, H5N8 clade 2.3.4.4 highly pathogenic avian influenza (HPAI) viruses of the A/Goose/Guangdong/1/1996 lineage emerged in poultry and wild birds in Asia, Europe and North America. Here, wild birds were extensively investigated in the Netherlands for HPAI H5N8 virus (real-time polymerase chain reaction targeting the matrix and H5 gene) and antibody detection (haemagglutination inhibition and virus neutralisation assays) before, during and after the first virus detection in Europe in late 2014. Between 21 February 2015 and 31 January 2016, 7,337 bird samples were tested for the virus. One HPAI H5N8 virus-infected Eurasian wigeon (Anas penelope) sampled on 25 February 2015 was detected. Serological assays were performed on 1,443 samples, including 149 collected between 2007 and 2013, 945 between 14 November 2014 and 13 May 2015, and 349 between 1 September and 31 December 2015. Antibodies specific for HPAI H5 clade 2.3.4.4 were absent in wild bird sera obtained before 2014 and present in sera collected during and after the HPAI H5N8 emergence in Europe, with antibody incidence declining after the 2014/15 winter. Our results indicate that the HPAI H5N8 virus has not continued to circulate extensively in wild bird populations since the 2014/15 winter and that independent maintenance of the virus in these populations appears unlikely. This article is copyright of The Authors, 2016.

  8. Monte Carlo simulation of TrueBeam flattening-filter-free beams using Varian phase-space files: Comparison with experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belosi, Maria F.; Fogliata, Antonella, E-mail: antonella.fogliata-cozzi@eoc.ch, E-mail: afc@iosi.ch; Cozzi, Luca

    2014-05-15

    Purpose: Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams, against experimental measurements from ten TrueBeam Linacs. Methods: The phase-space files have been used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes 3 × 3, 6 × 6, and 10 × 10 cm² using 1 × 1 × 1 mm³ voxels and for 20 × 20 and 40 × 40 cm² with 2 × 2 × 2 mm³ voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream of this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose-difference and distance-to-agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Results: Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly due to an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm², while the discrepancy increased toward 2% in the 40 × 40 cm² cases. Gamma analysis resulted in an agreement of 100% when a 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm² field (∼95% agreement). With the more stringent criteria of 1%, 1 mm, the agreement reduced to almost 95% for field sizes up to 10 × 10 cm², worse for larger fields. Unflatness and slope FFF-specific parameters are in line with the possible energy underestimation of the simulated results relative to experimental data. Conclusions: The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm², which is the range of field sizes most commonly used in combination with the FFF, high-dose-rate beams.

  9. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  10. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.

  11. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. Copyright © 2014 Elsevier Ltd. All rights reserved.
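
    The reading-frame-coding property discussed above rests on how a set of trinucleotides behaves when codewords are concatenated and read in the two shifted frames; comma-free circular codes are the extreme case in which no codeword ever appears across a junction out of frame. A minimal sketch of the classical comma-free check for a trinucleotide code (an illustration of the general property, not the paper's PrRFC statistic):

```python
from itertools import product

def is_comma_free(code):
    """Return True if no codeword of the trinucleotide code appears in a
    shifted frame of any concatenation of two codewords."""
    code = set(code)
    for a, b in product(code, repeat=2):
        pair = a + b          # six nucleotides
        frame1 = pair[1:4]    # frame shifted by one position
        frame2 = pair[2:5]    # frame shifted by two positions
        if frame1 in code or frame2 in code:
            return False
    return True

# Hypothetical examples:
print(is_comma_free({"AAC", "GTT"}))  # True: no out-of-frame hits
print(is_comma_free({"AAA", "TTT"}))  # False: "AAA" reappears in shifted frames of "AAA"+"AAA"
```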

  12. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  13. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC (3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code, with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB higher than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.

  14. Trace-shortened Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Solomon, G.

    1994-01-01

    Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.

  15. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  16. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
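
    A minimal control-flow sketch of the outer/inner interaction described above, assuming hypothetical placeholder decoders (inner_turbo_decode, outer_rs_decode are not APIs from the paper): the outer decoder either stops the inner iterations early or asks for more of them.

    ```python
    def decode_concatenated(received, inner_turbo_decode, outer_rs_decode, max_iters=8):
        """Sketch of the outer/inner decoder interaction. `inner_turbo_decode(received, n)`
        returns soft output after n inner iterations; `outer_rs_decode(soft_info)` is
        assumed to return (success_flag, decoded_frame)."""
        frame = None
        for n_iters in range(1, max_iters + 1):
            soft_info = inner_turbo_decode(received, n_iters)   # one more inner iteration
            ok, frame = outer_rs_decode(soft_info)              # reliability-based RS decoding
            if ok:                      # outer decoding succeeded: stop inner iterations early
                return frame
        return frame                    # give up after the preset maximum number of iterations

    # Dummy stand-ins just to exercise the control flow:
    dummy_inner = lambda r, n: r
    dummy_outer = lambda s: (len(s) % 2 == 0, s)
    print(decode_concatenated([1, 0, 1, 1], dummy_inner, dummy_outer))
    ```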

  17. Performance Analysis of New Binary User Codes for DS-CDMA Communication

    NASA Astrophysics Data System (ADS)

    Usha, Kamle; Jaya Sankar, Kottareddygari

    2016-03-01

    This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes: an n-bit Gray code is appended with its n-bit inverse Gray code to construct a 2n-length binary user code. Like Walsh codes, these binary user codes are available in sizes that are powers of two; additionally, code sets of length 6 and its even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. Performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over the AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
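
    The correlation analysis mentioned above can be sketched as follows. The periodic correlation routine is generic; the example "user code" construction (an n-bit Gray word followed by its bitwise complement) is only an assumption for illustration and may differ from the paper's actual inverse-Gray construction.

    ```python
    import numpy as np

    def bipolar(bits):
        """Map {0,1} bits to a +/-1 sequence."""
        return 1 - 2 * np.asarray(bits)

    def periodic_correlation(x, y):
        """Periodic cross-correlation of two equal-length +/-1 sequences for all shifts."""
        n = len(x)
        return np.array([int(np.dot(x, np.roll(y, k))) for k in range(n)])

    def gray(i):
        return i ^ (i >> 1)

    def user_code(i, n=3):
        """Illustrative 2n-bit code: n-bit Gray word followed by its bitwise complement
        (assumed construction, not necessarily the paper's inverse-Gray mapping)."""
        g = [(gray(i) >> b) & 1 for b in reversed(range(n))]
        return g + [1 - b for b in g]

    c1, c2 = bipolar(user_code(2)), bipolar(user_code(5))
    print("auto  :", periodic_correlation(c1, c1))
    print("cross :", periodic_correlation(c1, c2))
    ```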

  18. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  19. Multilevel Concatenated Block Modulation Codes for the Frequency Non-selective Rayleigh Fading Channel

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun

    1996-01-01

    This paper is concerned with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.

  20. Unaligned instruction relocation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  1. Unaligned instruction relocation

    DOEpatents

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.

    2018-01-23

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  2. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much longer codes. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length > 500) with reasonable computational cost.
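
    The sidelobe criterion discussed above can be illustrated with the well-known length-13 Barker code, whose aperiodic autocorrelation sidelobes all have magnitude at most 1. This is a small self-contained check, not the search method proposed in the paper.

    ```python
    import numpy as np

    # Length-13 Barker code as a +/-1 sequence.
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

    def aperiodic_autocorrelation(c):
        """Aperiodic autocorrelation of a +/-1 sequence for non-negative lags."""
        n = len(c)
        return [int(np.dot(c[:n - k], c[k:])) for k in range(n)]

    acf = aperiodic_autocorrelation(barker13)
    print(acf)                                  # [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
    peak_sidelobe = max(abs(v) for v in acf[1:])
    print("peak sidelobe:", peak_sidelobe)      # 1
    ```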

  3. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of punctured RA codes, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes under maximum-likelihood (ML) decoding is analyzed and compared to random codes using very tight bounds. The weight distribution of some simple ARA codes is obtained, and using the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to punctured RA codes.

  4. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.

  5. Ensemble coding of face identity is not independent of the coding of individual identity.

    PubMed

    Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina

    2018-06-01

    Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.

  6. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.

  7. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
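
    The copy-and-permute (lifting) step mentioned above can be sketched as follows. The small base matrix below is arbitrary, not the family proposed in the article, and parallel edges are ignored in this simplified version.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lift_protograph(base, N):
        """Expand a protograph base matrix into an N-times larger parity-check matrix:
        each 1 in the base matrix becomes an N x N permutation matrix, each 0 an
        N x N zero block (multiple edges are not handled in this sketch)."""
        rows, cols = base.shape
        H = np.zeros((rows * N, cols * N), dtype=int)
        for r in range(rows):
            for c in range(cols):
                if base[r, c]:
                    perm = rng.permutation(N)                  # random edge permutation
                    H[r * N:(r + 1) * N, c * N:(c + 1) * N] = np.eye(N, dtype=int)[perm]
        return H

    # Arbitrary 2 x 4 protograph (rate 1/2) lifted by a factor of 4:
    base = np.array([[1, 1, 1, 0],
                     [0, 1, 1, 1]])
    H = lift_protograph(base, 4)
    print(H.shape)          # (8, 16)
    ```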

  8. Determination of the k_Qclin,Qmsr^fclin,fmsr correction factors for detectors used with an 800 MU/min CyberKnife(®) system equipped with fixed collimators and a study of detector response to small photon beams using a Monte Carlo method.

    PubMed

    Moignier, C; Huet, C; Makovicka, L

    2014-07-01

    In a previous work, output ratio (OR_det) measurements were performed for the 800 MU/min CyberKnife(®) at the Oscar Lambret Center (COL, France) using several commercially available detectors as well as two passive dosimeters (EBT2 radiochromic film and micro-LiF TLD-700). The primary aim of the present work was to determine by Monte Carlo calculations the output factor in water (OF_MC,w) and the k_Qclin,Qmsr^fclin,fmsr correction factors. The secondary aim was to study the detector response in small beams using Monte Carlo simulation. The LINAC head of the CyberKnife(®) was modeled using the PENELOPE Monte Carlo code system. The primary electron beam was modeled using a monoenergetic source with a radial Gaussian distribution. The model was adjusted by comparisons between calculated and measured lateral profiles and tissue-phantom ratios obtained with the largest field. In addition, the PTW 60016 and 60017 diodes, the PTW 60003 diamond, and the micro-LiF were modeled. Output ratios with modeled detectors (OR_MC,det) and OF_MC,w were calculated and compared to measurements, in order to validate the model for the smallest fields and to calculate the k_Qclin,Qmsr^fclin,fmsr correction factors, respectively. For the study of the influence of detector characteristics on their response in small beams: first, the impact of the atomic composition and the mass density of silicon, LiF, and diamond materials was investigated; second, the material, the volume averaging, and the coating effects of the detecting material on the detector responses were estimated. Finally, the influence of the size of the silicon chip on diode response was investigated. Looking at measurement ratios (uncorrected output factors) compared to OF_MC,w, the PTW 60016, 60017 and Sun Nuclear EDGE diodes systematically over-responded (about +6% for the 5 mm field), whereas the PTW 31014 Pinpoint chamber systematically under-responded (about -12% for the 5 mm field). OR_det measured with the SFD diode and PTW 60003 diamond detectors were in good agreement with OF_MC,w except for the 5 mm field size (about -7.5% for the diamond and +3% for the SFD). Good agreement with OF_MC,w was obtained with the EBT2 film and micro-LiF dosimeters (deviation less than 1.4% for all fields investigated). The k_Qclin,Qmsr^fclin,fmsr correction factors for several detectors used in this work have been calculated. The impact of atomic composition on the dosimetric response of detectors was found to be insignificant, unlike the mass density and size of the detecting material. The results obtained with the passive dosimeters showed that they can be used for small-beam OF measurements without correction factors. The study of detector response showed that OR_det depends on the mass density, the volume averaging, and the coating effects of the detecting material. Each effect was quantified for the PTW 60016 and 60017 diodes, the micro-LiF, and the PTW 60003 diamond detectors. None of the active detectors used in this work can be recommended as a reference for small field dosimetry, but an improved diode detector with a smaller silicon chip coated with tissue-equivalent material is anticipated (by simulation) to be a reliable small field dosimetric detector in a nonequilibrium field.
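
    In the formalism commonly used for small-field dosimetry, the correction factor is the ratio of the field output factor in water to the detector-measured output ratio, so an over-responding detector gets a factor below unity. A minimal numerical sketch, with purely illustrative numbers rather than the paper's data:

    ```python
    def small_field_correction_factor(of_w, or_det):
        """k correction factor relating the detector-measured output ratio (OR_det)
        to the field output factor in water (OF_w): k = OF_w / OR_det."""
        return of_w / or_det

    # Illustrative numbers only: a diode that over-responds by roughly 6% in the
    # smallest field needs a correction factor below unity.
    print(round(small_field_correction_factor(0.68, 0.72), 3))   # ~0.944
    ```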

  9. Spectral distribution of particle fluence in small field detectors and its implication on small field dosimetry.

    PubMed

    Benmakhlouf, Hamza; Andreo, Pedro

    2017-02-01

    Correction factors for the relative dosimetry of narrow megavoltage photon beams have recently been determined in several publications. These corrections are required because of the several small-field effects generally thought to be caused by the lack of lateral charged particle equilibrium (LCPE) in narrow beams. Correction factors for relative dosimetry are ultimately necessary to account for the fluence perturbation caused by the detector. For most small field detectors the perturbation depends on field size, resulting in large correction factors when the field size is decreased. In this work, electron and photon fluence differential in energy will be calculated within the radiation sensitive volume of a number of small field detectors for 6 MV linear accelerator beams. The calculated electron spectra will be used to determine electron fluence perturbation as a function of field size, and their implications for small field dosimetry will be analyzed. Fluence spectra were calculated with the user code PenEasy, based on the PENELOPE Monte Carlo system. The detectors simulated were one liquid ionization chamber, two air ionization chambers, one diamond detector, and six silicon diodes, all manufactured either by PTW or IBA. The spectra were calculated for broad (10 cm × 10 cm) and narrow (0.5 cm × 0.5 cm) photon beams in order to investigate the field size influence on the fluence spectra and its resulting perturbation. The photon fluence spectra were used to analyze the impact of absorption and generation of photons. These will have a direct influence on the electrons generated in the detector radiation sensitive volume. The electron fluence spectra were used to quantify the perturbation effects and their relation to output correction factors. The photon fluence spectra obtained for all detectors were similar to the spectrum in water except for the shielded silicon diodes. The photon fluence in the latter group was strongly influenced, mostly in the low-energy region, by photoabsorption in the high-Z shielding material. For the ionization chambers and the diamond detector, the electron fluence spectra were found to be similar to that in water, for both field sizes. In contrast, electron spectra in the silicon diodes were much higher than that in water for both field sizes. The estimated perturbations of the fluence spectra for the silicon diodes were 11-21% for the large fields and 14-27% for the small fields. These perturbations are related to the atomic number, density and mean excitation energy (I-value) of silicon, as well as to the influence of the "extracameral" components surrounding the detector sensitive volume. For most detectors the fluence perturbation was also found to increase when the field size was decreased, consistent with the increased small-field effects observed for the smallest field sizes. The present work improves the understanding of small-field effects by relating output correction factors to spectral fluence perturbations in small field detectors. It is shown that the main reasons for the well-known small-field effects in silicon diodes are the high-Z and density of the "extracameral" detector components and the high I-value of silicon relative to that of water and diamond. Compared to these parameters, the density and atomic number of the radiation sensitive volume material play a less significant role. © 2016 American Association of Physicists in Medicine.

  10. 76 FR 77549 - Lummi Nation-Title 20-Code of Laws-Liquor Code

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Lummi Nation--Title 20--Code of Laws--Liquor... amendment to Lummi Nation's Title 20--Code of Laws--Liquor Code. The Code regulates and controls the... this amendment to Title 20--Lummi Nation Code of Laws--Liquor Code by Resolution 2011-038 on March 1...

  11. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  12. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  13. Amino acid codes in mitochondria as possible clues to primitive codes

    NASA Technical Reports Server (NTRS)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.

  14. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well-developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  15. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  16. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  17. You've Written a Cool Astronomy Code! Now What Do You Do with It?

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.

    2014-01-01

    Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.

  18. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  19. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is included. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is also included. This study was motivated by the fact that coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be determined by simulation.

  20. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes through some examples of ARA codes, we show that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the threshold outperforms not only RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
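
    The repeat-interleave-accumulate chain at the heart of these codes can be sketched as follows. This shows a plain RA encoder only; the ARA precoder and its puncturing are omitted, and the random interleaver is an arbitrary choice for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def accumulate(bits):
        """Rate-1 accumulator 1/(1+D): running XOR of the input bits."""
        return np.bitwise_xor.accumulate(bits)

    def ra_encode(info_bits, repeat=3):
        """Plain repeat-accumulate (RA) encoding: repeat, interleave, accumulate.
        ARA codes additionally place an accumulator (precoder) in front of this
        chain and puncture it; that refinement is not shown here."""
        repeated = np.repeat(info_bits, repeat)
        interleaved = repeated[rng.permutation(len(repeated))]
        return accumulate(interleaved)

    info = np.array([1, 0, 1, 1, 0], dtype=int)
    print(ra_encode(info))
    ```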

  1. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.

  2. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are eliminated. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
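
    The girth-4 condition referred to above can be checked directly on an exponent matrix using the standard 4-cycle criterion for circulant-based QC-LDPC codes; the toy matrices below are illustrative, not the jointly designed codes from the paper.

    ```python
    from itertools import combinations

    def has_girth4_cycle(P, Z):
        """Standard 4-cycle test for a QC-LDPC exponent matrix P with circulant size Z:
        a 4-cycle exists iff for some rows i1 < i2 and columns j1 < j2 with all four
        entries non-null (here: >= 0),
        P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1] == 0 (mod Z)."""
        rows, cols = len(P), len(P[0])
        for i1, i2 in combinations(range(rows), 2):
            for j1, j2 in combinations(range(cols), 2):
                e = (P[i1][j1], P[i1][j2], P[i2][j2], P[i2][j1])
                if all(x >= 0 for x in e) and (e[0] - e[1] + e[2] - e[3]) % Z == 0:
                    return True
        return False

    # Toy exponent matrices (-1 would mark an all-zero circulant), circulant size Z = 7:
    print(has_girth4_cycle([[0, 1, 2], [0, 1, 2]], 7))   # True: the differences cancel
    print(has_girth4_cycle([[0, 1, 2], [0, 2, 4]], 7))   # False for Z = 7
    ```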

  3. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and low maximum variable node degree.

  4. The EDIT-COMGEOM Code

    DTIC Science & Technology

    1975-09-01

    This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code into the target description data that can be used in the GIFT computer code.

  5. Impacts of Model Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  6. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  7. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

    This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.

  8. Performance of automated and manual coding systems for occupational data: a case study of historical records.

    PubMed

    Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S

    2012-03-01

    Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, an automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and the Standardized Occupation and Industry Coding software program. We calculated agreement between coding methods of classification into major Census occupational groups. Automated coding software assigned codes to 71% of occupations and 76% of industries. Of this subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding. However, some manual coding is required to code incomplete information. We found substantial variability between coders in the assignment of occupations although not as large for major groups.
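
    The chance-corrected agreement statistic quoted above (kappa) can be computed as follows; the toy labels are illustrative only, not the ARIC study records.

    ```python
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two coders assigning categorical codes to the same items."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        counts_a, counts_b = Counter(labels_a), Counter(labels_b)
        expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
        return (observed - expected) / (1 - expected)

    # Illustrative toy data (occupational major groups):
    auto   = ["managerial", "clerical", "farming", "clerical", "service", "farming"]
    manual = ["managerial", "clerical", "farming", "service",  "service", "farming"]
    print(round(cohens_kappa(auto, manual), 2))
    ```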

  9. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
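
    As a concrete illustration of the convolutional coding fundamentals mentioned above, here is a minimal rate-1/2 feedforward encoder using the classic constraint-length-3 generators (7, 5 in octal); it is a generic textbook example, not necessarily a code studied in the report.

    ```python
    def conv_encode(bits, generators=(0b111, 0b101), constraint_length=3):
        """Rate-1/2 feedforward convolutional encoder. Each input bit shifts into a
        register; one output bit per generator polynomial is the parity of the tapped
        register contents. Tail bits flush the register back to the all-zero state."""
        state = 0
        out = []
        for b in list(bits) + [0] * (constraint_length - 1):       # append flush bits
            state = ((state << 1) | b) & ((1 << constraint_length) - 1)
            for g in generators:
                out.append(bin(state & g).count("1") % 2)
        return out

    print(conv_encode([1, 0, 1, 1]))
    ```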

  10. WE-D-BRF-01: FEATURED PRESENTATION - Investigating Particle Track Structures Using Fluorescent Nuclear Track Detectors and Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowdell, S; Paganetti, H; Schuemann, J

    Purpose: To report on the efforts funded by the AAPM seed funding grant to develop the basis for fluorescent nuclear track detector (FNTD) based radiobiological experiments in combination with dedicated Monte Carlo simulations (MCS) on the nanometer scale. Methods: Two confocal microscopes were utilized in this study. Two FNTD samples were used to find the optimal microscope settings, one FNTD irradiated with 11.1 MeV/u Gold ions and one irradiated with 428.77 MeV/u Carbon ions. The first sample provided a brightly luminescent central track, while the latter was used to test the capabilities to observe secondary electrons. MCS were performed using the TOPAS beta9 version, layered on top of Geant4.9.6p02. Two sets of simulations were performed, one with the Geant4-DNA physics list and approximating the FNTDs by water, a second set using the Penelope physics list in a water-approximated FNTD and an aluminum-oxide FNTD. Results: Within the first half of the funding period, we have successfully established readout capabilities of FNTDs at our institute. Due to technical limitations, our microscope setup is significantly different from the approach implemented at the DKFZ, Germany. However, we can clearly reconstruct Carbon tracks in 3D with an electron track resolution of 200 nm. A second microscope with superior readout capabilities will be tested in the second half of the funding period; we expect an improvement in signal-to-background ratio with the same resolution. We have successfully simulated tracks in FNTDs. The more accurate Geant4-DNA track simulations can be used to reconstruct the track energy from the size and brightness of the observed tracks. Conclusion: We have achieved the goals set in the seed funding proposal: the setup of FNTD readout and simulation capabilities. We will work on improving the readout resolution to validate our MCS track structures down to the nanometer scales.

  11. Alternative Fuels Data Center: Codes and Standards Resources

    Science.gov Websites

    Provides codes and standards resources, including vehicle and infrastructure codes and standards charts for biodiesel, electric, ethanol, natural gas, and propane vehicles.

  12. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral databases. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to the cadastral change is put forward, and a coding format composed of a street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rules have been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to determine the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response. This verifies the feasibility and effectiveness of the coding rules to some extent.
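
    A minimal sketch of the compose/parse idea behind such a hierarchical parcel code follows. The field widths and names are hypothetical choices for illustration; the paper's actual format may differ.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CadastralCode:
        """Hierarchical parcel code: street, block, father parcel, child parcel,
        grandchild parcel. Field widths are hypothetical, chosen only to show the idea."""
        street: int
        block: int
        father: int
        child: int
        grandchild: int

        def compose(self):
            return (f"{self.street:03d}{self.block:03d}{self.father:04d}"
                    f"{self.child:03d}{self.grandchild:02d}")

        @classmethod
        def parse(cls, code):
            return cls(int(code[0:3]), int(code[3:6]), int(code[6:10]),
                       int(code[10:13]), int(code[13:15]))

    c = CadastralCode(street=12, block=7, father=150, child=2, grandchild=0)
    print(c.compose())                            # '012007015000200'
    print(CadastralCode.parse(c.compose()) == c)  # True: round-trip is lossless
    ```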

  13. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  14. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    NASA Astrophysics Data System (ADS)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    A linear code is a very basic code and very useful in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because self-dual codes are among the best known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes which are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is very important for constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate examples for every kind of construction.

  15. Two Perspectives on the Origin of the Standard Genetic Code

    NASA Astrophysics Data System (ADS)

    Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa

    2014-12-01

    The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.

  16. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for both the spectral and the spatial coding. The proposed system can fully eliminate multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are taken into account in analysing the code performance. In comparison with existing two-dimensional (2D) codes, such as the 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that the proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of the 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.

  17. Quality improvement of International Classification of Diseases, 9th revision, diagnosis coding in radiation oncology: single-institution prospective study at University of California, San Francisco.

    PubMed

    Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E

    2015-01-01

    Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%) with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.

  18. Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design

    DTIC Science & Technology

    2011-02-01

    Coding. In this work, we use rate compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's... [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389... source coding rate, a channel coding rate, and a power level to all nodes in the

  19. New quantum codes derived from a family of antiprimitive BCH codes

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
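
    Constructions of this kind rest on enumerating the q^2-ary cyclotomic cosets modulo n = q^(2m) + 1, since the Hermitian dual-containing condition is checked coset by coset. The sketch below is a generic coset enumeration written for illustration; the parameters q = 4, m = 2 are assumptions chosen only to keep the example small, and none of the code comes from the paper.

      # Hedged sketch: enumerate q^2-ary cyclotomic cosets modulo n = q^(2m) + 1.
      # The parameters below (q = 4, m = 2, so n = 257 and base = 16) are illustrative.

      def cyclotomic_cosets(base, n):
          """Partition {0, 1, ..., n-1} into cosets C_s = {s, s*base, s*base^2, ...} mod n."""
          seen, cosets = set(), []
          for s in range(n):
              if s in seen:
                  continue
              coset, x = [], s
              while x not in coset:
                  coset.append(x)
                  x = (x * base) % n
              seen.update(coset)
              cosets.append(sorted(coset))
          return cosets

      q, m = 4, 2
      n = q ** (2 * m) + 1                  # antiprimitive length n = q^(2m) + 1
      for c in cyclotomic_cosets(q ** 2, n)[:5]:
          print(c)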

  20. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d sub free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  1. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    NASA Astrophysics Data System (ADS)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing the worst two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code has a gain of more than 1 dB over the conventional four-ary modulation code.

  2. A multidisciplinary approach to vascular surgery procedure coding improves coding accuracy, work relative value unit assignment, and reimbursement.

    PubMed

    Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres

    2016-08-01

    Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  3. Manual versus automated coding of free-text self-reported medication data in the 45 and Up Study: a validation study.

    PubMed

    Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily

    2015-03-30

    Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
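
    The level-by-level agreement reported above amounts to truncating each ATC code to 7, 5, 4 or 3 characters before comparing the automated assignment with the manual gold standard. A minimal sketch of that comparison follows; the example codes and helper names are hypothetical and are not drawn from the study data.

      # Hedged sketch: sensitivity of automated ATC coding at different truncation levels.
      # The example code lists below are invented for illustration only.

      def sensitivity(manual, automated, level):
          """Fraction of manually coded entries whose automated code matches
          when both ATC codes are truncated to `level` characters."""
          hits = sum(1 for m, a in zip(manual, automated)
                     if a is not None and m[:level] == a[:level])
          return hits / len(manual)

      manual_codes    = ["C09AA02", "N02BE01", "A10BA02", "C07AB07"]   # gold standard
      automated_codes = ["C09AA02", "N02BA01", None,      "C07AB02"]   # automated output

      for level in (7, 5, 4, 3):   # chemical substance, chemical, pharmacological, therapeutic subgroup
          print(level, f"{sensitivity(manual_codes, automated_codes, level):.2f}")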

  4. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, in which the fundamentals of coding, block coding and convolutional coding are discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, the interleaver and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.

  5. Audit of Clinical Coding of Major Head and Neck Operations

    PubMed Central

    Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean

    2009-01-01

    INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944

  6. Reviewing the Challenges and Opportunities Presented by Code Switching and Mixing in Bangla

    ERIC Educational Resources Information Center

    Hasan, Md. Kamrul; Akhand, Mohd. Moniruzzaman

    2015-01-01

    This paper investigates the issues related to code-switching/code-mixing in an ESL context. Some preliminary data on Bangla-English code-switching/code-mixing has been analyzed in order to determine which structural pattern of code-switching/code-mixing is predominant in different social strata. This study also explores the relationship of…

  7. Reviewing the Challenges and Opportunities Presented by Code Switching and Mixing in Bangla

    ERIC Educational Resources Information Center

    Hasan, Md. Kamrul; Akhand, Mohd. Moniruzzaman

    2014-01-01

    This paper investigates the issues related to code-switching/code-mixing in an ESL context. Some preliminary data on Bangla-English code-switching/code-mixing has been analyzed in order to determine which structural pattern of code-switching/code-mixing is predominant in different social strata. This study also explores the relationship of…

  8. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
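
    For orientation, the classical result in this area can be restated as follows (a hedged restatement, not a quotation from the report): if C_1 and C_2 are linear codes over GF(q) with minimum distances d_1, d_2 and A_{d_1}, A_{d_2} minimum-weight words respectively, then

      % Minimum distance and number of minimum-weight words of a product (tensor product) code
      d_{\min}(C_1 \otimes C_2) = d_1 d_2,
      \qquad
      A_{d_1 d_2}(C_1 \otimes C_2) = \frac{A_{d_1}(C_1)\, A_{d_2}(C_2)}{q - 1}

    since every minimum-weight word of the product code is a rank-one tensor u \otimes v, and the pairs (\alpha u, \alpha^{-1} v) give the same codeword; over GF(2) the count is simply the product.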

  9. National Underground Mines Inventory

    DTIC Science & Technology

    1983-10-01

    system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a... four characters designate Field office. 17-18 State Code PIC 99 FIPS code for state in which mine is located. 19-21 County Code PIC 999 FIPS code for... Designate a general product class based on SIC code. 28-29 Mine Type PIC 99 Metal/Nonmetal mine type code. Based on subunit operations code and canvass code

  10. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    5-31 Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder... for AWGN channels has long been studied. There are well-known soft-decision codes like the turbo codes and LDPC codes that can approach capacity to... bits) low density parity check (LDPC) code 1. 2. The coded bits are randomly interleaved so that bits nearby go through different sub-channels, and are

  11. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  12. Towers of generalized divisible quantum codes

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.

  13. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.

  14. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.

  15. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  16. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  17. Hyperbolic and semi-hyperbolic surface codes for quantum storage

    NASA Astrophysics Data System (ADS)

    Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.

    2017-09-01

    We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.

  18. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  19. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.

  20. PNPN Latchup in Bipolar LSI Devices.

    DTIC Science & Technology

    1982-01-01


  1. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  2. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of the following: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third-party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  3. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
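
    As a concrete illustration of the idea (not code from the report), the sketch below compresses sparse 7-bit source blocks with the parity-check matrix of the Hamming (7,4) code: each block is treated as an error pattern and only its 3-bit syndrome is kept, and decompression returns the minimum-weight pattern in the corresponding coset, which is exact whenever the block has weight at most one.

      # Hedged sketch of syndrome-source-coding with the Hamming (7,4) code.
      # A sparse 7-bit block x is compressed to its 3-bit syndrome s = Hx (mod 2);
      # decompression returns the minimum-weight vector with that syndrome.
      from itertools import product

      H = [[1, 0, 1, 0, 1, 0, 1],      # parity-check matrix of the Hamming (7,4) code
           [0, 1, 1, 0, 0, 1, 1],
           [0, 0, 0, 1, 1, 1, 1]]

      def syndrome(x):
          return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

      # Coset-leader table: for each syndrome, the first (minimum-weight) pattern found.
      leaders = {}
      for x in sorted(product((0, 1), repeat=7), key=sum):
          leaders.setdefault(syndrome(x), x)

      def compress(x):                 # 7 bits -> 3 bits
          return syndrome(x)

      def decompress(s):               # 3 bits -> most likely (minimum-weight) 7-bit pattern
          return leaders[s]

      block = (0, 0, 1, 0, 0, 0, 0)    # sparse source block of weight 1
      print(compress(block), decompress(compress(block)) == block)   # exact for weight <= 1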

  4. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion deblurring a well-posed problem. With coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is therefore significant for coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time spent searching for the optimal code decreases with the presented method. The restored image shows better subjective quality and superior objective evaluation values.
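
    A common baseline criterion in the coded-exposure literature is to prefer codes whose discrete Fourier transform has a large worst-case magnitude, so that no frequency is lost to the blur. The sketch below scores random candidate codes with that baseline criterion; it is not the improved criterion proposed in the paper, and a plain random search stands in for the genetic algorithm purely to keep the example short. Code length and weight are illustrative choices.

      # Hedged sketch: scoring candidate exposure codes by the minimum magnitude of
      # their DFT, a baseline broadband criterion. Random search replaces the paper's
      # genetic algorithm; the code length and number of ones below are illustrative.
      import cmath
      import random

      def dft_magnitudes(code):
          n = len(code)
          return [abs(sum(c * cmath.exp(-2j * cmath.pi * k * t / n)
                          for t, c in enumerate(code)))
                  for k in range(n)]

      def score(code):
          # Broadband codes preserve all frequencies: reward a large worst-case magnitude.
          return min(dft_magnitudes(code))

      def random_code(length, ones):
          bits = [1] * ones + [0] * (length - ones)
          random.shuffle(bits)
          return bits

      random.seed(0)
      best = max((random_code(32, 16) for _ in range(500)), key=score)
      print(best, round(score(best), 3))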

  5. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, an in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  6. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for two of the other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large code size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  7. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
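
    The block-interleaving argument can be made concrete with the simplest MDS code, the single parity check (SPC) code. In the sketch below (toy parameters, with erased symbols marked None), SPC codewords are interleaved to depth I, so a burst of up to I erased symbols touches each codeword at most once and is filled back in from the parity. This is an illustration of the general idea, not an implementation of any particular deep-space format.

      # Hedged sketch: burst-erasure recovery with a block-interleaved single parity
      # check (SPC) code. Toy parameters; an erased symbol is represented by None.
      DEPTH, N = 4, 5                      # interleave depth, SPC codeword length (k = N - 1)

      def spc_encode(data_row):            # append one parity symbol (XOR of the data)
          parity = 0
          for d in data_row:
              parity ^= d
          return data_row + [parity]

      def interleave(rows):                # transmit column by column
          return [rows[i][j] for j in range(N) for i in range(DEPTH)]

      def deinterleave(stream):
          rows = [[None] * N for _ in range(DEPTH)]
          for idx, sym in enumerate(stream):
              rows[idx % DEPTH][idx // DEPTH] = sym
          return rows

      def recover(row):                    # an SPC codeword tolerates exactly one erasure
          if row.count(None) == 1:
              filled = 0
              for s in row:
                  if s is not None:
                      filled ^= s
              row[row.index(None)] = filled
          return row

      rows = [spc_encode([i, i + 1, i + 2, i + 3]) for i in range(DEPTH)]
      stream = interleave(rows)
      for i in range(6, 6 + DEPTH):        # a burst of DEPTH consecutive erasures
          stream[i] = None
      recovered = [recover(r) for r in deinterleave(stream)]
      print(recovered == rows)             # True: each codeword lost at most one symbol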

  8. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  9. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  10. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.

  11. Transversal Clifford gates on folded surface codes

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-12

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  12. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
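
    The rate-compatibility property means the puncturing tables are nested: a bit released at a high rate is never re-punctured at a lower rate, so the receiver only ever needs additional bits. The sketch below shows how a type-II ARQ transmitter would release increments of a rate-1/2 mother-code output; the puncturing tables are hypothetical examples, not the ones constructed in the paper.

      # Hedged sketch: nested (rate-compatible) puncturing of a rate-1/2 mother-code
      # output and incremental-redundancy transmission for a type-II hybrid ARQ scheme.
      # The puncturing tables are illustrative only.

      # Puncturing period of 4 information bits = 8 mother-code bits; a 1 marks a sent bit.
      TABLES = {
          "4/5": [1, 0, 0, 0, 1, 1, 1, 1],   # 5 of 8 bits sent -> rate 4/5
          "2/3": [1, 0, 1, 0, 1, 1, 1, 1],   # 6 of 8 bits sent -> rate 2/3
          "1/2": [1, 1, 1, 1, 1, 1, 1, 1],   # all bits sent    -> rate 1/2
      }

      def select(coded_len, table):
          """Positions of the mother-code bits released by a puncturing table."""
          period = len(table)
          return [i for i in range(coded_len) if table[i % period]]

      def increments(coded_len, order=("4/5", "2/3", "1/2")):
          """Yield (rate, newly released bit positions) for successive transmissions."""
          already = set()
          for rate in order:
              pos = [p for p in select(coded_len, TABLES[rate]) if p not in already]
              already.update(pos)
              yield rate, pos

      for rate, new_positions in increments(16):   # 16 mother-code output bits
          print(rate, new_positions)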

  13. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  14. Methodology, status and plans for development and assessment of Cathare code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented. The general strategy used for the development and the assessment of the code is presented. Analytical experiments with separate effect tests, and component tests, are used for the development and the validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for the future developments of the code are presented. They concern the optimization of the code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Also, physical improvements are required in the field of low pressure transients and in the modeling for the 3-D model.

  15. Accuracy of external cause-of-injury coding in VA polytrauma patient discharge records.

    PubMed

    Carlson, Kathleen F; Nugent, Sean M; Grill, Joseph; Sayer, Nina A

    2010-01-01

    Valid and efficient methods of identifying the etiology of treated injuries are critical for characterizing patient populations and developing prevention and rehabilitation strategies. We examined the accuracy of external cause-of-injury codes (E-codes) in Veterans Health Administration (VHA) administrative data for a population of injured patients. Chart notes and E-codes were extracted for 566 patients treated at any one of four VHA Polytrauma Rehabilitation Center sites between 2001 and 2006. Two expert coders, blinded to VHA E-codes, used chart notes to assign "gold standard" E-codes to injured patients. The accuracy of VHA E-coding was examined based on these gold standard E-codes. Only 382 of 517 (74%) injured patients were assigned E-codes in VHA records. Sensitivity of VHA E-codes varied significantly by site (range: 59%-91%, p < 0.001). Sensitivity was highest for combat-related injuries (81%) and lowest for fall-related injuries (60%). Overall specificity of E-codes was high (92%). E-coding accuracy was markedly higher when we restricted analyses to records that had been assigned VHA E-codes. E-codes may not be valid for ascertaining source-of-injury data for all injuries among VHA rehabilitation inpatients at this time. Enhanced training and policies may ensure more widespread, standardized use and accuracy of E-codes for injured veterans treated in the VHA.

  16. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... for Residential Construction in High Wind Regions. ICC 700: National Green Building Standard The..., coordinated, and necessary to regulate the built environment. Federal agencies frequently use these codes and... International Codes and Standards consist of the following: ICC Codes International Building Code. International...

  17. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... for Residential Construction in High Wind Areas. ICC 700: National Green Building Standard. The... Codes and Standards that are comprehensive, coordinated, and necessary to regulate the built environment... International Codes and Standards consist of the following: ICC Codes International Building Code. International...

  18. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  19. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  20. 47 CFR 15.214 - Cordless telephones.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... discrete digital codes. Factory-set codes must be continuously varied over at least 256 possible codes as... readily select from among at least 256 possible discrete digital codes. The cordless telephone shall be... fixed code that is continuously varied among at least 256 discrete digital codes as each telephone is...

  1. 28 CFR 36.608 - Guidance concerning model codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608... Codes § 36.608 Guidance concerning model codes. Upon application by an authorized representative of a... relevant model code and issue guidance concerning whether and in what respects the model code is consistent...

  2. Coded mask telescopes for X-ray astronomy

    NASA Astrophysics Data System (ADS)

    Skinner, G. K.; Ponman, T. J.

    1987-04-01

    The principles of the coded mask technique are discussed together with the methods of image reconstruction. The coded mask telescopes built at the University of Birmingham, including the SL 1501 coded mask X-ray telescope flown on the Skylark rocket and the Coded Mask Imaging Spectrometer (COMIS) projected for the Soviet space station Mir, are described. A diagram of a coded mask telescope and some designs for coded masks are included.
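
    The reconstruction step can be illustrated in one dimension with cyclic correlation: the detector records the sky convolved with the mask pattern, and correlating those counts with the balanced decoding array G = 2A - 1 recovers the point sources. The sketch below uses a random open/closed mask as a stand-in; it is an idealized illustration, not the mask design or reconstruction software of the Birmingham instruments.

      # Hedged sketch: 1-D coded-mask imaging reconstructed by cyclic correlation.
      # A random mask and the balanced decoding array G = 2A - 1 are used purely for
      # illustration; real instruments use specially designed 2-D patterns such as
      # uniformly redundant arrays.
      import random

      N = 255
      random.seed(1)
      mask = [random.randint(0, 1) for _ in range(N)]     # A: 1 = open, 0 = closed
      sky = [0.0] * N
      sky[5], sky[20] = 100.0, 100.0                      # two point sources

      # Detector counts: every sky element casts a cyclically shifted mask shadow.
      detector = [sum(sky[j] * mask[(i + j) % N] for j in range(N)) for i in range(N)]

      # Correlation reconstruction with the balanced decoding array G = 2A - 1.
      G = [2 * a - 1 for a in mask]
      image = [sum(detector[i] * G[(i + j) % N] for i in range(N)) for j in range(N)]

      peaks = sorted(range(N), key=lambda j: image[j], reverse=True)[:2]
      print(sorted(peaks))                                # expected near [5, 20]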

  3. Performance of data-compression codes in channels with errors. Final report, October 1986-January 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-10-01

    Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
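
    Of the three candidates, the Huffman code is the easiest to demonstrate. The sketch below builds a Huffman code for a toy symbol distribution with the standard merge-the-two-least-probable heap construction; it illustrates the general technique only and has no connection to the FVLF study data.

      # Hedged sketch: building a Huffman code with the standard heap construction.
      # The toy probabilities are chosen only for illustration.
      import heapq
      from itertools import count

      def huffman_code(probabilities):
          """Return a dict mapping each symbol to its binary codeword string."""
          tie = count()                          # tie-breaker keeps heap tuples comparable
          heap = [(p, next(tie), {sym: ""}) for sym, p in probabilities.items()]
          heapq.heapify(heap)
          while len(heap) > 1:
              p1, _, c1 = heapq.heappop(heap)    # merge the two least probable subtrees
              p2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + w for s, w in c1.items()}
              merged.update({s: "1" + w for s, w in c2.items()})
              heapq.heappush(heap, (p1 + p2, next(tie), merged))
          return heap[0][2]

      probs = {"A": 0.5, "B": 0.25, "C": 0.15, "D": 0.10}
      code = huffman_code(probs)
      avg_len = sum(probs[s] * len(w) for s, w in code.items())
      print(code, f"average length = {avg_len:.2f} bits/symbol")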

  4. Coding in Muscle Disease.

    PubMed

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  5. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

    In this paper, we will use the construction technique proposed in to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and the fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in literature.

  6. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, and BWR and VVER reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  7. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

Work performed during the reporting period is summarized. Construction of robustly good trellis codes for use with sequential decoding was developed. The robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications is investigated. A formula for computing the free distance of 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  8. Processor-in-memory-and-storage architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.

  9. Multi-level bandwidth efficient block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1989-01-01

The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C′ which has the same rate as C, a minimum squared Euclidean distance not less than that of code C, a trellis diagram with the same number of states as that of C and a smaller number of nearest neighbor codewords than that of C. In the last part, error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.

  10. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at the bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and can be more suitable for optical transmission systems.
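    As a rough illustration of how quasi-cyclic parity-check matrices are assembled, the sketch below expands a base exponent matrix into circulant permutation blocks. This is a generic circulant expansion with made-up exponent values, not the CRT construction of this paper.

```python
import numpy as np

def circulant_permutation(size, shift):
    """size x size identity matrix cyclically shifted right by `shift` columns."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def expand_qc_ldpc(exponent_matrix, size):
    """Expand a base exponent matrix into a binary QC-LDPC parity-check matrix.

    Each entry e >= 0 becomes a size x size circulant permutation with shift e;
    an entry of -1 becomes an all-zero block.
    """
    rows = []
    for row in exponent_matrix:
        blocks = [np.zeros((size, size), dtype=int) if e < 0
                  else circulant_permutation(size, e) for e in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Toy example: a 2 x 4 base matrix expanded with 5 x 5 circulants
# gives a (10, 20) parity-check matrix, i.e. a code of rate >= 1/2.
E = [[0, 1, 2, -1],
     [3, -1, 0, 4]]
H = expand_qc_ldpc(E, 5)
print(H.shape)        # (10, 20)
print(H.sum(axis=0))  # column weights of the expanded matrix
```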

  11. Quality of recording of diabetes in the UK: how does the GP's method of coding clinical data affect incidence estimates? Cross-sectional study using the CPRD database

    PubMed Central

    Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim

    2017-01-01

    Objective To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding ‘poor’ quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Results Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled ‘poor’ quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831

  12. Practical guide to bar coding for patient medication safety.

    PubMed

    Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara

    2003-04-15

Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to the embedded data; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for a patient identification band, caregiver badge, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but also to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.
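    The article is about workflow rather than algorithms, but the machine-readable identifiers it mentions typically carry a simple checksum. A hedged sketch of the GS1/UPC-style mod-10 check digit follows; the digits used are a common textbook UPC example, not a drug code from the article.

```python
def gtin_check_digit(digits: str) -> int:
    """Mod-10 check digit used by GS1/GTIN (and UPC) barcodes.

    `digits` is the identifier *without* its final check digit.  Working from
    the rightmost digit leftwards, digits are weighted 3, 1, 3, 1, ...
    """
    total = 0
    for i, ch in enumerate(reversed(digits)):
        weight = 3 if i % 2 == 0 else 1
        total += int(ch) * weight
    return (10 - total % 10) % 10

# Textbook UPC-A example: the full symbol 036000291452 ends in check digit 2.
print(gtin_check_digit("03600029145"))  # -> 2
```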

  13. Rate-compatible punctured convolutional codes (RCPC codes) and their applications

    NASA Astrophysics Data System (ADS)

    Hagenauer, Joachim

    1988-04-01

The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
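    A minimal sketch of the puncturing and rate-compatibility idea, assuming a rate-1/2 mother code and made-up period-4 puncturing tables (not the tables from the paper); the ordering of the surviving bits is simplified.

```python
import numpy as np

# Hypothetical rate-compatible puncturing tables for a rate-1/2 mother code,
# puncturing period P = 4 (one column per info bit, one row per output stream).
# A '1' means the coded bit is transmitted.  Rate-compatibility: every bit kept
# by a higher-rate table is also kept by all lower-rate (better-protected) tables.
TABLES = {
    "4/5": np.array([[1, 1, 1, 1],
                     [1, 0, 0, 0]]),
    "4/6": np.array([[1, 1, 1, 1],
                     [1, 0, 1, 0]]),
    "4/8": np.array([[1, 1, 1, 1],
                     [1, 1, 1, 1]]),
}

def puncture(coded_bits, table):
    """coded_bits has shape (2, L): the two output streams of a rate-1/2 code."""
    _, L = coded_bits.shape
    reps = (L + table.shape[1] - 1) // table.shape[1]
    mask = np.tile(table, (1, reps))[:, :L].astype(bool)
    return coded_bits[mask]            # surviving bits (transmission order simplified)

def is_rate_compatible(high_rate, low_rate):
    """Every bit kept by the high-rate table must also be kept by the low-rate table."""
    return bool(np.all(low_rate >= high_rate))

coded = np.random.randint(0, 2, size=(2, 12))            # stand-in coded stream, 12 info bits
print(len(puncture(coded, TABLES["4/5"])) / 12)          # -> 1.25 coded bits per info bit (rate 4/5)
print(is_rate_compatible(TABLES["4/5"], TABLES["4/6"]))  # -> True
```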

  14. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at the bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code has lower encoding/decoding complexity than the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code. The proposed novel QC-LDPC(4 288, 4 020) code can be more suitable for the increasing development requirements of high-speed optical transmission systems.

  15. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational- simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.

  16. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
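    A toy sketch of the transform-coding idea on a single 8x8 chrominance block, using an orthonormal DCT and uniform quantization. This is a generic illustration of transform coding with coarse chrominance quantization, not the specific transform or bit-allocation scheme of the paper.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def code_block(block, step):
    """Forward DCT, uniform quantization, inverse DCT for one 8x8 plane block."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                 # 2-D DCT
    quantized = np.round(coeffs / step)      # coarse quantization (the lossy step)
    return C.T @ (quantized * step) @ C      # reconstruction

rng = np.random.default_rng(0)
chroma = rng.normal(0.0, 10.0, size=(8, 8))  # stand-in chrominance block
rec = code_block(chroma, step=16.0)          # coarser step -> fewer bits, more error
print(np.sqrt(np.mean((chroma - rec) ** 2))) # reconstruction RMS error
```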

  17. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits-per-character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters-decoded-in-error and of characters-printed-in-error-per-bit-error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular, block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most-promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
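    A minimal sketch of the "average bits per character" measurement for a Huffman code, assuming a stand-in narrative string; this is not the report's data or code.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Return {char: bitstring} for a classic Huffman code built over `text`."""
    heap = [(freq, i, ch) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    code = {ch: "" for _, _, ch in heap}
    i = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        for ch in left:      # prepend a bit for every leaf under each merged subtree
            code[ch] = "0" + code[ch]
        for ch in right:
            code[ch] = "1" + code[ch]
        heapq.heappush(heap, (f1 + f2, i, left + right))
        i += 1
    return code

narrative = "this is a small stand-in for one of the narrative files"
code = huffman_code(narrative)
bits = sum(len(code[ch]) for ch in narrative)
print(bits / len(narrative), "bits per character on average")
```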

  18. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

Convolutional Codes, in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer, Rate Compatible Punctured Convolutional Codes, in Proc. Int. Conf. ... achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error

  19. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-07-01

The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described and general considerations for coupling of these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.

  20. Design of Intelligent Cross-Layer Routing Protocols for Airborne Wireless Networks Under Dynamic Spectrum Access Paradigm

    DTIC Science & Technology

    2011-05-01

rate convolutional codes or the prioritized Rate-Compatible Punctured ... (glossary fragments: Quality of service; RCPC, rate-compatible and punctured convolutional codes; SNR, signal-to-noise ratio; SSIM ...) Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The

  1. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. The application of coded excitation technology in medical ultrasonic Doppler imaging

    NASA Astrophysics Data System (ADS)

    Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin

    2008-03-01

Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation technology to a medical ultrasonic Doppler imaging system offers higher SNR and deeper penetration depth than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to weak signals, and proper coded excitation benefits the received spectrum of the Doppler signal. Firstly, this paper analyzes the application of coded excitation technology to medical ultrasonic Doppler imaging in general terms, showing the advantages and promise of coded excitation technology, and then introduces the principle and theory of coded excitation. Secondly, we compare several coded sequences (including chirp and fake-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio and sensitivity of the Doppler signal, we choose Barker codes as the coded sequence. Finally, we design the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantage of applying coded excitation technology in the Digital Medical Ultrasonic Doppler Endoscope Imaging System.
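    As a small illustration of why Barker codes suit coded excitation, the matched-filter (autocorrelation) output of the length-13 Barker code has a mainlobe of 13 and unit-level sidelobes. A quick check in Python (a generic sketch, not the authors' implementation):

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Matched filtering = correlating the received sequence with the transmitted code.
autocorr = np.correlate(barker13, barker13, mode="full")

print(autocorr.max())               # mainlobe: 13
print(np.abs(autocorr[:12]).max())  # peak sidelobe: 1 -> about 22.3 dB mainlobe-to-sidelobe ratio
```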

  3. Assessment of the impact of the change from manual to automated coding on mortality statistics in Australia.

    PubMed

    McKenzie, Kirsten; Walker, Sue; Tong, Shilu

    It remains unclear whether the change from a manual to an automated coding system (ACS) for deaths has significantly affected the consistency of Australian mortality data. The underlying causes of 34,000 deaths registered in 1997 in Australia were dual coded, in ICD-9 manually, and by using an automated computer coding program. The diseases most affected by the change from manual to ACS were senile/presenile dementia, and pneumonia. The most common disease to which a manually assigned underlying cause of senile dementia was coded with ACS was unspecified psychoses (37.2%). Only 12.5% of codes assigned by ACS as senile dementia were coded the same by manual coders. This study indicates some important differences in mortality rates when comparing mortality data that have been coded manually with those coded using an automated computer coding program. These differences may be related to both the different interpretation of ICD coding rules between manual and automated coding, and different co-morbidities or co-existing conditions among demographic groups.

  4. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2 exp 8) with an error correcting capability of t = 16 eight bit symbols. This code's excellent performance and the existence of fast, cost effective, decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
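    A toy sketch of a depth-4 block symbol interleaver of the kind described (write by rows, read by columns), with shortened codewords for readability. This is a generic illustration of the burst-spreading idea, not the CCSDS specification itself.

```python
import numpy as np

def interleave(symbols, depth):
    """Block interleaver: fill a depth x n array row by row, read column by column."""
    symbols = np.asarray(symbols)
    assert symbols.size % depth == 0
    return symbols.reshape(depth, -1).T.reshape(-1)

def deinterleave(symbols, depth):
    symbols = np.asarray(symbols)
    return symbols.reshape(-1, depth).T.reshape(-1)

# Four RS codewords' worth of symbols (toy length 8 instead of 255).
data = np.arange(32)
tx = interleave(data, depth=4)
assert np.array_equal(deinterleave(tx, depth=4), data)

# A burst of 4 consecutive channel-symbol errors lands in 4 different codewords,
# so each RS decoder sees only one extra symbol error.
burst_positions = tx[10:14]
print(burst_positions // 8)  # -> [2 3 0 1]: each hit falls in a different codeword
```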

  5. QR code for medical information uses.

    PubMed

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
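    A minimal sketch of generating such a label, assuming the third-party Python `qrcode` package and a hypothetical payload; neither is from the article.

```python
# Requires the third-party `qrcode` package (pip install qrcode[pil]); the URL is hypothetical.
import qrcode

payload = "https://example.org/patient-info/demo"  # hypothetical medical-information URL
img = qrcode.make(payload)                         # build the QR symbol
img.save("label.png")                              # printable label image
```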

  6. Neural representation of objects in space: a dual coding account.

    PubMed Central

    Humphreys, G W

    1998-01-01

    I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227

  7. Financial and clinical governance implications of clinical coding accuracy in neurosurgery: a multidisciplinary audit.

    PubMed

Haliasos, N; Rezajooi, K; O'Neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar

    2010-04-01

Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors and in 40 cases, the initial HRG changed (10.4%). Financially, this translated to £111 revenue loss per patient episode and projected to £171,452 of annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous and with the coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.

  8. Federal Logistics Information Systems. FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Variable Length). Volume 9.

    DTIC Science & Technology

    1997-04-01

Excerpt from the Document Identifier Code input/output format tables (variable length): data element names with field sizes and codes, including NUMBER OF DATA COLLABORATORS, NUMBER OF DATA RECEIVERS, AUTHORIZED ITEM IDENTIFICATION DATA COLLABORATOR CODE, DATA ELEMENT TERMINATOR CODE, TYPE OF SCREENING CODE, OUTPUT DATA, REFERENCE NUMBER CATEGORY CODE (RNCC), and REFERENCE NUMBER VARIATION CODE (RNVC).

  9. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  10. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.

  11. Concatenated coding for low date rate space communications.

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1972-01-01

In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To keep the error rate at a very low level as well, it is necessary to use a sophisticated coding system (a longer code) without excessive decoding complexity. Concatenated coding has been shown to meet such requirements in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. A performance comparison of the three concatenated codes is made.

  12. New CPT codes: hospital, consultation, emergency and nursing facility services.

    PubMed

    Zuber, T J; Henley, D E

    1992-03-01

    New evaluation and management codes were created by the Current Procedural Terminology (CPT) Editorial Panel to ensure more accurate and consistent reporting of physician services. The new hospital inpatient codes describe three levels of service for both initial and subsequent care. Critical care services are reported according to the total time spent by a physician providing constant attention to a critically ill patient. Consultation codes are divided into four categories: office/outpatient, initial inpatient, follow-up inpatient and confirmatory. Emergency department services for both new and established patients are limited to five codes. In 1992, nursing facility services are described with either comprehensive-assessment codes or subsequent-care codes. Hospital discharge services may be reported in addition to the comprehensive nursing facility assessment. Since the 1992 CPT book will list only the new codes, and since all insurance carriers will not be using these codes in 1992, physicians are encouraged to keep their 1991 code books and contact their local insurance carriers to determine which codes will be used.

  13. Current Research on Non-Coding Ribonucleic Acid (RNA).

    PubMed

    Wang, Jing; Samuels, David C; Zhao, Shilin; Xiang, Yu; Zhao, Ying-Yong; Guo, Yan

    2017-12-05

Non-coding ribonucleic acid (RNA) has without a doubt captured the interest of biomedical researchers. The ability to screen the entire human genome with high-throughput sequencing technology has greatly enhanced the identification, annotation and prediction of the functionality of non-coding RNAs. In this review, we discuss the current landscape of non-coding RNA research and quantitative analysis. Non-coding RNA will be categorized into two major groups by size: long non-coding RNAs and small RNAs. In long non-coding RNA, we discuss regular long non-coding RNA, pseudogenes and circular RNA. In small RNA, we discuss miRNA, transfer RNA, piwi-interacting RNA, small nucleolar RNA, small nuclear RNA, Y RNA, signal recognition particle RNA, and 7SK RNA. We elaborate on the origin, detection method, and potential association with disease, putative functional mechanisms, and public resources for these non-coding RNAs. We aim to provide readers with a complete overview of non-coding RNAs and incite additional interest in non-coding RNA research.

  14. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  15. Coding in Stroke and Other Cerebrovascular Diseases.

    PubMed

    Korb, Pearce J; Jones, William

    2017-02-01

    Accurate coding is critical for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of coding principles for patients with strokes and other cerebrovascular diseases and includes an illustrative case as a review of coding principles in a patient with acute stroke.

  16. The Evolution and Expression Pattern of Human Overlapping lncRNA and Protein-coding Gene Pairs.

    PubMed

    Ning, Qianqian; Li, Yixue; Wang, Zhen; Zhou, Songwen; Sun, Hong; Yu, Guangjun

    2017-03-27

    Long non-coding RNA overlapping with protein-coding gene (lncRNA-coding pair) is a special type of overlapping genes. Protein-coding overlapping genes have been well studied and increasing attention has been paid to lncRNAs. By studying lncRNA-coding pairs in human genome, we showed that lncRNA-coding pairs were more likely to be generated by overprinting and retaining genes in lncRNA-coding pairs were given higher priority than non-overlapping genes. Besides, the preference of overlapping configurations preserved during evolution was based on the origin of lncRNA-coding pairs. Further investigations showed that lncRNAs promoting the splicing of their embedded protein-coding partners was a unilateral interaction, but the existence of overlapping partners improving the gene expression was bidirectional and the effect was decreased with the increased evolutionary age of genes. Additionally, the expression of lncRNA-coding pairs showed an overall positive correlation and the expression correlation was associated with their overlapping configurations, local genomic environment and evolutionary age of genes. Comparison of the expression correlation of lncRNA-coding pairs between normal and cancer samples found that the lineage-specific pairs including old protein-coding genes may play an important role in tumorigenesis. This work presents a systematically comprehensive understanding of the evolution and the expression pattern of human lncRNA-coding pairs.

  17. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.

  18. Circular codes revisited: a statistical approach.

    PubMed

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and underwent a rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  20. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  1. Critical Care Coding for Neurologists.

    PubMed

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  2. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  3. Diagnostic Coding for Epilepsy.

    PubMed

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  4. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    ERIC Educational Resources Information Center

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  5. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    ERIC Educational Resources Information Center

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  6. 17 CFR 229.406 - (Item 406) Code of ethics.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 406) Code of ethics. 229... 406) Code of ethics. (a) Disclose whether the registrant has adopted a code of ethics that applies to... code of ethics, explain why it has not done so. (b) For purposes of this Item 406, the term code of...

  7. VeryVote: A Voter Verifiable Code Voting System

    NASA Astrophysics Data System (ADS)

    Joaquim, Rui; Ribeiro, Carlos; Ferreira, Paulo

Code voting is a technique used to address the secure platform problem of remote voting. A code voting system consists of secretly sending, e.g. by mail, code sheets to voters that map their choices to entry codes in their ballot. While voting, the voter uses the code sheet to know what code to enter in order to vote for a particular candidate. In effect, the voter does the vote encryption and, since no malicious software on the PC has access to the code sheet, it is not able to change the voter’s intention. However, without compromising the voter’s privacy, the vote codes are not enough to prove that the vote is recorded and counted as cast by the election server.
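    A hedged sketch of generating one voter's code sheet with cryptographically random entry codes. This is illustrative only; VeryVote's actual protocol also covers vote verification, which is not shown, and the candidate names are hypothetical.

```python
import secrets

def make_code_sheet(candidates, code_length=8):
    """One voter's code sheet: each candidate gets a distinct random entry code."""
    codes = set()
    while len(codes) < len(candidates):                 # re-draw on the (unlikely) collision
        codes.add(secrets.token_hex(code_length // 2))  # e.g. 8 hex characters
    return dict(zip(candidates, sorted(codes)))

sheet = make_code_sheet(["Alice", "Bob", "Carol"])      # hypothetical candidates
print(sheet)  # mailed secretly to the voter; the PC only ever sees the chosen entry code
```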

  8. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates R=0.82 and 0.875 with moderate code lengths, n=4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications and status.

  9. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  10. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting.Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. 
Decoding algorithms presented include the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
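    As a concrete instance of trellis-based maximum likelihood decoding, below is a compact hard-decision Viterbi decoder for the standard rate-1/2, memory-2 convolutional code with generators (7, 5) in octal. This is a generic sketch, not code from the book.

```python
# Rate-1/2, memory-2 convolutional code with generators (7, 5) in octal.
def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                     # register bits: [input, s1, s0]
        out += [bin(reg & 0b111).count("1") % 2,   # generator 7 (binary 111)
                bin(reg & 0b101).count("1") % 2]   # generator 5 (binary 101)
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metric = [0, INF, INF, INF]                    # encoder starts in state 0
    paths = [[], [], [], []]
    for t in range(n_bits):
        new_metric, new_paths = [INF] * 4, [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [bin(reg & 0b111).count("1") % 2,
                            bin(reg & 0b101).count("1") % 2]
                branch = sum(e != r for e, r in zip(expected, received[2 * t:2 * t + 2]))
                nxt = reg >> 1
                if metric[state] + branch < new_metric[nxt]:
                    new_metric[nxt] = metric[state] + branch
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])  # survivor with the smallest path metric
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = conv_encode(msg)
noisy[3] ^= 1                                      # inject a single channel bit error
assert viterbi_decode(noisy, len(msg)) == msg      # the error is corrected
```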

  11. Potential application of item-response theory to interpretation of medical codes in electronic patient records

    PubMed Central

    2011-01-01

    Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameters values ranged from 4.47 to 11.58. The two parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
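    For reference, the two-parameter logistic (2PL) item response function used in such analyses, evaluated with made-up discrimination and calibration values chosen inside the ranges reported above; these are not the fitted GPRD estimates.

```python
import numpy as np

def two_pl(theta, discrimination, calibration):
    """2PL item response function: P(code recorded | latent trait theta)."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - calibration)))

# Hypothetical codes: a highly discriminating, easily endorsed code versus a weakly
# discriminating, rarely used one (values chosen inside the reported ranges,
# 0.67-2.78 for discrimination and 4.47-11.58 for calibration).
theta = np.linspace(2, 14, 5)
print(two_pl(theta, discrimination=2.5, calibration=5.0))
print(two_pl(theta, discrimination=0.7, calibration=11.0))
```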

  12. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
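    A generic sketch of the fitness-sharing computation (shared fitness = raw fitness divided by the niche count), with a made-up toy population. This illustrates the standard niching technique, not the authors' genetic-code landscape implementation.

```python
import numpy as np

def shared_fitness(raw_fitness, distances, sigma_share, alpha=1.0):
    """Fitness sharing: divide each raw fitness by its niche count.

    Sharing function: sh(d) = 1 - (d / sigma_share)**alpha for d < sigma_share, else 0.
    """
    sh = np.where(distances < sigma_share,
                  1.0 - (distances / sigma_share) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)   # includes sh(0) = 1 for the individual itself
    return raw_fitness / niche_counts

# Toy population of 4 hypothetical genetic codes with pairwise distances.
raw = np.array([0.9, 0.88, 0.87, 0.5])
d = np.array([[0.0, 0.1, 0.2, 3.0],
              [0.1, 0.0, 0.1, 3.0],
              [0.2, 0.1, 0.0, 3.0],
              [3.0, 3.0, 3.0, 0.0]])
print(shared_fitness(raw, d, sigma_share=1.0))
# The crowded niche (first three) is penalized; the isolated individual keeps its fitness.
```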

  13. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  14. Ethical and educational considerations in coding hand surgeries.

    PubMed

    Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E

    2014-07-01

    To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. In addition, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of 134 people to whom the survey was sent (response rate = 53%). We observed marked disparity in codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  15. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent, applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We will also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
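
    The backplane idea can be sketched in a few lines. The toy registry below (written in Python rather than Perl, and entirely hypothetical rather than an excerpt of PCCS) shows how a framework can be taught a minimal amount about each tool and then chain preprocessors, the simulation code, and postprocessors around a run.

        class Backplane:
            """Toy plug-in backplane: tools are registered per stage and run in order."""

            def __init__(self):
                self.stages = {"pre": [], "run": [], "post": []}

            def register(self, stage, name, command):
                # "Teach" the backplane a minimal amount about a new tool.
                self.stages[stage].append((name, command))

            def execute(self, context):
                for stage in ("pre", "run", "post"):
                    for name, command in self.stages[stage]:
                        print(f"[{stage}] {name}")
                        command(context)

        bp = Backplane()
        bp.register("pre", "mesh-generator", lambda ctx: ctx.setdefault("mesh", "mesh.dat"))
        bp.register("run", "hydrocode", lambda ctx: ctx.setdefault("result", "dump.h5"))
        bp.register("post", "visualizer", lambda ctx: print("plotting", ctx["result"]))
        bp.execute({})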

  16. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    PubMed

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  17. Summary of papers on current and anticipated uses of thermal-hydraulic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, R.

    1997-07-01

    The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the `user effect` is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).

  18. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
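
    The two generator-polynomial factorizations quoted above are easy to verify with carry-less (GF(2)) arithmetic. The short Python sketch below stores each polynomial as an integer bit mask (bit i standing for X^i) and checks both identities.

        def gf2_mul(a, b):
            """Carry-less multiplication of two GF(2) polynomials held as bit masks."""
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a <<= 1
                b >>= 1
            return result

        def poly(*exponents):
            mask = 0
            for e in exponents:
                mask |= 1 << e
            return mask

        # (X+1)(X^6+X+1) = X^7+X^6+X^2+1
        assert gf2_mul(poly(1, 0), poly(6, 1, 0)) == poly(7, 6, 2, 0)
        # (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1 (the X.25 generator)
        assert gf2_mul(poly(1, 0), poly(15, 14, 13, 12, 4, 3, 2, 1, 0)) == poly(16, 12, 5, 0)
        print("both factorizations hold over GF(2)")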

  19. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
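
    A toy version of the unfolding problem illustrates the approach: a particle swarm searches for a non-negative spectrum whose folding through the response matrix reproduces the measured pulse-height distribution. The response matrix, synthetic measurement, and PSO coefficients in the Python sketch below are illustrative placeholders, not those of the SDPSO code.

        import numpy as np

        rng = np.random.default_rng(0)
        n_channels, n_bins = 12, 8
        R = rng.uniform(0.0, 1.0, size=(n_channels, n_bins))   # placeholder response matrix
        m = R @ rng.uniform(0.0, 5.0, size=n_bins)              # synthetic pulse-height "measurement"

        def mismatch(phi):
            return np.sum((R @ np.clip(phi, 0.0, None) - m) ** 2)

        n_particles, n_iter = 40, 300
        x = rng.uniform(0.0, 5.0, size=(n_particles, n_bins))   # candidate spectra
        v = np.zeros_like(x)
        p_best, p_best_val = x.copy(), np.array([mismatch(p) for p in x])
        g_best = p_best[np.argmin(p_best_val)].copy()

        w, c1, c2 = 0.7, 1.5, 1.5                                # inertia and acceleration coefficients
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = x + v
            vals = np.array([mismatch(p) for p in x])
            improved = vals < p_best_val
            p_best[improved], p_best_val[improved] = x[improved], vals[improved]
            g_best = p_best[np.argmin(p_best_val)].copy()

        print("final folding residual:", mismatch(g_best))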

  20. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
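
    Gallager's hard-decision bit-flipping decoding mentioned above can be sketched in a few lines of Python. The example uses a tiny (7,4) parity-check matrix rather than a finite-geometry LDPC code (which would be far larger), and it flips a single most-suspicious bit per iteration, a simple variant of the algorithm.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])   # Hamming(7,4) parity checks

        def bit_flip_decode(r, H, max_iter=20):
            r = r.copy()
            for _ in range(max_iter):
                syndrome = H @ r % 2
                if not syndrome.any():
                    return r                       # all checks satisfied
                # count, for each bit, how many failing checks it participates in
                failures = (H * syndrome[:, None]).sum(axis=0)
                r[np.argmax(failures)] ^= 1        # flip the most suspicious bit
            return r

        received = np.zeros(7, dtype=int)           # the all-zero word is a codeword
        received[3] ^= 1                            # inject a single bit error
        print("decoded word:", bit_flip_decode(received, H))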

  1. Statistical inference of static analysis rules

    NASA Technical Reports Server (NTRS)

    Engler, Dawson Richards (Inventor)

    2009-01-01

    Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of validity is determined for each code instance as a function of the counted number of observances and the counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
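
    The ranking idea can be illustrated with a toy example: count how often each code instance observes and violates a rule, then order the violations by an estimated likelihood that the rule is valid. The smoothed-proportion score in the Python sketch below is an illustrative assumption, not the scoring function claimed in the patent.

        counts = {
            # code instance           (observances, violations)
            "lock(a) ... unlock(a)":  (120, 1),
            "open(f) ... close(f)":   (45, 3),
            "foo() ... bar()":        (2, 2),
        }

        def validity_likelihood(observed, violated):
            # add-one smoothed proportion of observances (illustrative choice)
            return (observed + 1) / (observed + violated + 2)

        ranked = sorted(counts.items(),
                        key=lambda kv: validity_likelihood(*kv[1]),
                        reverse=True)
        for instance, (obs, vio) in ranked:
            score = validity_likelihood(obs, vio)
            print(f"{instance:26s} likelihood={score:.3f} (observed {obs}, violated {vio})")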

  2. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  3. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
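
    The revised rule is a one-line computation; the Python sketch below evaluates 2.5*m/(1 - r) for a few common code rates and shows that the familiar value of five memory lengths is recovered only at rate 1/2.

        def truncation_depth(m, r):
            """Rule-of-thumb truncation depth for a rate-r convolutional code with memory m."""
            return 2.5 * m / (1.0 - r)

        for r in (1/2, 2/3, 3/4, 7/8):
            print(f"rate {r:5.3f}: depth ~ {truncation_depth(6, r):5.1f} (memory m = 6)")
        # rate 0.500 gives 30.0 = 5*m; higher-rate codes need substantially deeper truncation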

  4. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-08

    A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme at the same SNR values. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.

  5. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.

  6. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.

  7. Degenerate quantum codes and the quantum Hamming bound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarvepalli, Pradeep; Klappenecker, Andreas

    2010-03-15

    The parameters of a nondegenerate quantum code must obey the Hamming bound. An important open problem in quantum coding theory is whether the parameters of a degenerate quantum code can violate this bound for nondegenerate quantum codes. In this article we show that Calderbank-Shor-Steane (CSS) codes, over a prime power alphabet q ≥ 5, cannot beat the quantum Hamming bound. We prove a quantum version of the Griesmer bound for the CSS codes, which allows us to strengthen Rains' bound that an [[n,k,d

  8. Protecting quantum memories using coherent parity check codes

    NASA Astrophysics Data System (ADS)

    Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv

    2018-07-01

    Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2

  9. System Design Description for the TMAD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finfrock, S.H.

    This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.

  10. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    PubMed

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.

  11. 78 FR 51139 - Notice of Proposed Changes to the National Handbook of Conservation Practices for the Natural...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-20

    ... (Code 324), Field Border (Code 386), Filter Strip (Code 393), Land Smoothing (Code 466), Livestock... the implementation requirement document to the specifications and plans. Filter Strip (Code 393)--The...

  12. Error-Detecting Identification Codes for Algebra Students.

    ERIC Educational Resources Information Center

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
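
    A typical example of such a code is the ISBN-10 check digit, a weighted sum modulo 11 that catches any single-digit error and any transposition of adjacent digits. The short Python sketch below computes and verifies it (the sample ISBN is purely illustrative).

        def isbn10_check_digit(first_nine: str) -> str:
            total = sum((i + 1) * int(d) for i, d in enumerate(first_nine))
            r = total % 11
            return "X" if r == 10 else str(r)

        def isbn10_is_valid(isbn: str) -> bool:
            digits = isbn.replace("-", "")
            return isbn10_check_digit(digits[:9]) == digits[9].upper()

        print(isbn10_check_digit("030640615"))   # computes the tenth (check) digit
        print(isbn10_is_valid("0306406152"))     # True
        print(isbn10_is_valid("0306406125"))     # False: adjacent digits transposed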

  13. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  14. 25 CFR 18.101 - May a tribe create and adopt its own tribal probate code?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false May a tribe create and adopt its own tribal probate code... PROBATE CODES Approval of Tribal Probate Codes § 18.101 May a tribe create and adopt its own tribal probate code? Yes. A tribe may create and adopt a tribal probate code. ...

  15. Automated Coding Software: Development and Use to Enhance Anti-Fraud Activities*

    PubMed Central

    Garvin, Jennifer H.; Watzlaf, Valerie; Moeini, Sohrab

    2006-01-01

    This descriptive research project identified characteristics of automated coding systems that have the potential to detect improper coding and to minimize improper or fraudulent coding practices in the setting of automated coding used with the electronic health record (EHR). Recommendations were also developed for software developers and users of coding products to maximize anti-fraud practices. PMID:17238546

  16. Honoring Native American Code Talkers: The Road to the Code Talkers Recognition Act of 2008 (Public Law 110-420)

    ERIC Educational Resources Information Center

    Meadows, William C.

    2011-01-01

    Interest in North American Indian code talkers continues to increase. In addition to numerous works about the Navajo code talkers, several publications on other groups of Native American code talkers--including the Choctaw, Comanche, Hopi, Meskwaki, Canadian Cree--and about code talkers in general have appeared. This article chronicles recent…

  17. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Gear Codes, Descriptions, and Use 15 Table... ALASKA Pt. 679, Table 15 Table 15 to Part 679—Gear Codes, Descriptions, and Use Gear Codes, Descriptions, and Use (X indicates where this code is used) Name of gear Use alphabetic code to complete the...

  18. Quantum error-correcting codes from algebraic geometry codes of Castle type

    NASA Astrophysics Data System (ADS)

    Munuera, Carlos; Tenório, Wanderson; Torres, Fernando

    2016-10-01

    We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.

  19. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  20. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  1. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).

  2. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.

  3. Coding for urologic office procedures.

    PubMed

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Convolution Operations on Coding Metasurface to Reach Flexible and Continuous Controls of Terahertz Beams.

    PubMed

    Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang

    2016-10-01

    The concept of coding metasurface makes a link between physical metamaterial particles and digital codes, and hence it is possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle, called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of coding particles, the required coding pattern can be simply achieved by the modulus of two coding matrices. This study demonstrates that the scattering patterns directly calculated from the coding pattern using the Fourier transform are in excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of arbitrary directions. This work opens a new route to studying metamaterials from a fully digital perspective, pointing to the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
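
    The principle can be illustrated numerically in a much-simplified one-dimensional setting: if the far-field scattering pattern is approximated as the Fourier transform of the aperture reflection phases, adding a phase-gradient coding sequence (modulo 2π) to an arbitrary coding pattern circularly shifts its pattern, exactly as the convolution (shift) theorem predicts. The Python sketch below is a toy model under these assumptions, not the paper's full-wave setup.

        import numpy as np

        n = 64
        rng = np.random.default_rng(1)
        base_phase = np.pi * rng.integers(0, 2, size=n)   # arbitrary 1-bit coding sequence (0 or pi)
        gradient = 2 * np.pi * 0.25 * np.arange(n)         # gradient code: shift of 0.25 in spatial frequency

        def pattern(phase):
            """|FFT|^2 of the unit-amplitude aperture field exp(j*phase)."""
            return np.abs(np.fft.fft(np.exp(1j * phase))) ** 2 / n

        p0 = pattern(base_phase)
        p1 = pattern(np.mod(base_phase + gradient, 2 * np.pi))   # "add" the two codes modulo 2*pi

        # The new pattern is the old one circularly shifted by n*0.25 = 16 bins.
        print(np.allclose(p1, np.roll(p0, 16)))                   # True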

  5. On fuzzy semantic similarity measure for DNA coding.

    PubMed

    Ahmad, Muneer; Jung, Low Tang; Bhuiyan, Md Al-Amin

    2016-02-01

    A coding measure scheme numerically translates the DNA sequence to a time domain signal for protein coding regions identification. A number of coding measure schemes based on numerology, geometry, fixed mapping, statistical characteristics and chemical attributes of nucleotides have been proposed in recent decades. Such coding measure schemes lack the biologically meaningful aspects of nucleotide data and hence do not significantly discriminate coding regions from non-coding regions. This paper presents a novel fuzzy semantic similarity measure (FSSM) coding scheme centering on FSSM codons' clustering and genetic code context of nucleotides. Certain natural characteristics of nucleotides i.e. appearance as a unique combination of triplets, preserving special structure and occurrence, and ability to own and share density distributions in codons have been exploited in FSSM. The nucleotides' fuzzy behaviors, semantic similarities and defuzzification based on the center of gravity of nucleotides revealed a strong correlation between nucleotides in codons. The proposed FSSM coding scheme attains a significant enhancement in coding regions identification i.e. 36-133% as compared to other existing coding measure schemes tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously

    PubMed Central

    Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.

    2012-01-01

    Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch. PMID:22773886

  7. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  8. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  9. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of the macro coding units is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  10. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes and continues to grow; in 2011, the ASCL added, on average, 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  11. Leveraging the NLM map from SNOMED CT to ICD-10-CM to facilitate adoption of ICD-10-CM.

    PubMed

    Cartagena, F Phil; Schaeffer, Molly; Rifai, Dorothy; Doroshenko, Victoria; Goldberg, Howard S

    2015-05-01

    Develop and test web services to retrieve and identify the most precise ICD-10-CM code(s) for a given clinical encounter. Facilitate creation of user interfaces that 1) provide an initial shortlist of candidate codes, ideally visible on a single screen; and 2) enable code refinement. To satisfy our high-level use cases, the analysis and design process involved reviewing available maps and crosswalks, designing the rule adjudication framework, determining necessary metadata, retrieving related codes, and iteratively improving the code refinement algorithm. The Partners ICD-10-CM Search and Mapping Services (PI-10 Services) are SOAP web services written using Microsoft's .NET 4.0 Framework, Windows Communications Framework, and SQL Server 2012. The services cover 96% of the Partners problem list subset of SNOMED CT codes that map to ICD-10-CM codes and can return up to 76% of the 69,823 billable ICD-10-CM codes prior to creation of custom mapping rules. We consider ways to increase 1) the coverage ratio of the Partners problem list subset of SNOMED CT codes and 2) the upper bound of returnable ICD-10-CM codes by creating custom mapping rules. Future work will investigate the utility of the transitive closure of SNOMED CT codes and other methods to assist in custom rule creation and, ultimately, to provide more complete coverage of ICD-10-CM codes. ICD-10-CM will be easier for clinicians to manage if applications display short lists of candidate codes from which clinicians can subsequently select a code for further refinement. The PI-10 Services support ICD-10 migration by implementing this paradigm and enabling users to consistently and accurately find the best ICD-10-CM code(s) without translation from ICD-9-CM. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Diabetes Mellitus Coding Training for Family Practice Residents.

    PubMed

    Urse, Geraldine N

    2015-07-01

    Although physicians regularly use numeric coding systems such as the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to describe patient encounters, coding errors are common. One of the most complicated diagnoses to code is diabetes mellitus. The ICD-9-CM currently has 39 separate codes for diabetes mellitus; this number will be expanded to more than 50 with the introduction of ICD-10-CM in October 2015. To assess the effect of a 1-hour focused presentation on ICD-9-CM codes on diabetes mellitus coding. A 1-hour focused lecture on the correct use of diabetes mellitus codes for patient visits was presented to family practice residents at Doctors Hospital Family Practice in Columbus, Ohio. To assess resident knowledge of the topic, a pretest and posttest were given to residents before and after the lecture, respectively. Medical records of all patients with diabetes mellitus who were cared for at the hospital 6 weeks before and 6 weeks after the lecture were reviewed and compared for the use of diabetes mellitus ICD-9 codes. Eighteen residents attended the lecture and completed the pretest and posttest. The mean (SD) percentage of correct answers was 72.8% (17.1%) for the pretest and 84.4% (14.6%) for the posttest, for an improvement of 11.6 percentage points (P≤.035). The percentage of total available codes used did not substantially change from before to after the lecture, but the use of the generic ICD-9-CM code for diabetes mellitus type II controlled (250.00) declined (58 of 176 [33%] to 102 of 393 [26%]) and the use of other codes increased, indicating a greater variety in codes used after the focused lecture. After a focused lecture on diabetes mellitus coding, resident coding knowledge improved. Review of medical record data did not reveal an overall change in the number of diabetic codes used after the lecture but did reveal a greater variety in the codes used.

  13. Quality of recording of diabetes in the UK: how does the GP's method of coding clinical data affect incidence estimates? Cross-sectional study using the CPRD database.

    PubMed

    Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim

    2017-01-25

    To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding 'poor' quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled 'poor' quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  14. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterpart and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  15. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.

  16. cncRNAs: Bi-functional RNAs with protein coding and non-coding functions

    PubMed Central

    Kumari, Pooja; Sampath, Karuna

    2015-01-01

    For many decades, the major function of mRNA was thought to be to provide protein-coding information embedded in the genome. The advent of high-throughput sequencing has led to the discovery of pervasive transcription of eukaryotic genomes and opened the world of RNA-mediated gene regulation. Many regulatory RNAs have been found to be incapable of protein coding and are hence termed non-coding RNAs (ncRNAs). However, studies in recent years have shown that several previously annotated non-coding RNAs have the potential to encode proteins, and conversely, some coding RNAs have regulatory functions independent of the protein they encode. Such bi-functional RNAs, with both protein-coding and non-coding functions, which we term 'cncRNAs', have emerged as new players in cellular systems. Here, we describe the functions of some cncRNAs identified from bacteria to humans. Because the functions of many RNAs across genomes remain unclear, we propose that RNAs be classified as coding, non-coding or both only after careful analysis of their functions. PMID:26498036

  17. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance for codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if component codes of a multi-level modulation code and the types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was found that the difference in performance between the suboptimum multi-stage soft decision maximum likelihood decoding of a modulation code and the single stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a probability of an incorrect decoding for a block of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  18. Continuous Codes and Standards Improvement (CCSI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivkin, Carl H; Burgess, Robert M; Buttner, William J

    2015-10-21

    As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing codes solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because as technology and the state of safety knowledge develops there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.

  19. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  20. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which has easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code can achieve better error correction performance under the condition of the additive white Gaussian noise (AWGN) channel with the iterative decoding sum-product algorithm (SPA). At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. So it is more suitable for optical communication systems.
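
    The quasi-cyclic structure referred to above can be sketched by assembling a parity-check matrix from circulant permutation matrices whose shift exponents are taken from powers of a multiplicative-group element. The circulant size and exponent table in the Python sketch below are toy values for illustration, not the construction of the QC-LDPC(5334,4962) code itself.

        import numpy as np

        L = 7   # circulant size (toy value)
        g = 3   # generator of the multiplicative group of integers modulo 7
        exponents = [[pow(g, i + j, L) for j in range(4)] for i in range(2)]

        def circulant(shift, size):
            """size x size circulant permutation matrix: identity cyclically shifted by 'shift'."""
            return np.roll(np.eye(size, dtype=int), shift, axis=1)

        H = np.block([[circulant(s, L) for s in row] for row in exponents])
        print(H.shape)                                    # (14, 28): a quasi-cyclic parity-check matrix
        print(H.sum(axis=0).min(), H.sum(axis=1).min())   # uniform column weight 2 and row weight 4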

  1. Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1997-01-01

    In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).

  2. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  3. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

    punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. [Table residue: Data rate (Mbps), Modulation, Coding rate, Coded bits per subcarrier] ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data

  4. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than 180...

  5. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than 180...

  6. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than 180...

  7. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.

  8. Colour cyclic code for Brillouin distributed sensors

    NASA Astrophysics Data System (ADS)

    Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne

    2015-09-01

    For the first time, a colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code presents an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is realized and validates the theoretical coding gain.

  9. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    PubMed

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that an entirely correct primary diagnosis was assigned in only 54 patients (54%), and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated by the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  10. 47 CFR 52.19 - Area code relief.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... new area codes within their states. Such matters may include, but are not limited to: Directing... realignment; establishing new area code boundaries; establishing necessary dates for the implementation of... code relief planning encompasses all functions related to the implementation of new area codes that...

  11. 47 CFR 52.19 - Area code relief.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... new area codes within their states. Such matters may include, but are not limited to: Directing... realignment; establishing new area code boundaries; establishing necessary dates for the implementation of... code relief planning encompasses all functions related to the implementation of new area codes that...

  12. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  13. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. This report will consider the following codes: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code will be described separately in the following section with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes. However, the codes have since been further developed to extend their capabilities.

  14. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
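
    The record does not give the coder's details; as a hedged reading, "double delta coding" is taken here as coding of second-order differences. The sketch below simply compares the symbol entropies of a raw scan line, its first differences and its second differences, which is the reason differencing pays off when adjacent picture elements are correlated.

```python
# Hedged sketch (an interpretation, not the coder from the paper): first- and
# second-difference (delta and double-delta) transforms of a correlated scan
# line, compared by empirical symbol entropy.
import numpy as np

def delta(x):
    return np.diff(x, prepend=x[:1])          # first-order differences

def double_delta(x):
    return delta(delta(x))                    # second-order differences

def entropy_bits(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic correlated "scan line": a random walk with small steps.
line = np.cumsum(np.random.default_rng(1).integers(-2, 3, size=4096)) + 128
for name, transform in [("raw", lambda v: v), ("delta", delta), ("double delta", double_delta)]:
    print(f"{name:12s} ~{entropy_bits(transform(line)):.2f} bits/pel")
```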

  15. One-way quantum repeaters with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d -level systems for large dimension d . We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.

  16. CRUNCH_PARALLEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaker, Dana E.; Steefel, Carl I.

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). Crunch is a general-purpose reactive transport code developed by Carl Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  17. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
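
    The atom positions and coefficients whose joint encoding the paper optimizes are produced by the matching pursuit decomposition itself. The sketch below shows that greedy decomposition step over a generic dictionary; the dictionary, its size and the stopping rule are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: greedy matching pursuit over a fixed unit-norm dictionary,
# yielding (atom position, coefficient) pairs such as those encoded above.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Return (index, coefficient) pairs; dictionary columns are unit-norm atoms."""
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))   # best-matching atom position
        coeff = float(correlations[k])             # its coefficient
        residual -= coeff * dictionary[:, k]       # peel the atom off the residual
        atoms.append((k, coeff))
    return atoms, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                     # normalize atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 100]                # a sparse test signal
picked, res = matching_pursuit(x, D, n_atoms=5)
print(picked[:2], np.linalg.norm(res))
```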

  18. World Breastfeeding Week 1994: making the Code work.

    PubMed

    1994-01-01

    WHO adopted the International Code of Marketing of Breastmilk Substitutes in 1981, with the US being the only member to vote against it. The US abandoned its opposition and voted for the International Code at the World Health Assembly in May 1994. The US was also part of a unanimous vote to promote a resolution that clearly proclaims breast milk to be better than breast milk substitutes and the best food for infants. World Breastfeeding Week 1994 launched further efforts to promote the International Code. In 1994, through its Making the Code Work campaign, the World Alliance for Breastfeeding Action (WABA) will work on increasing awareness about the mission and promise of the International Code, notify governments of the Innocenti target date, call for governments to introduce rules and regulations based on the International Code, and encourage public interest groups, professional organizations, and the general public to monitor enforcement of the Code. So far, 11 countries have passed legislation including all or almost all provisions of the International Code. Governments of 36 countries have passed legislation including only some provisions of the International Code. The International Baby Food Action Network (IBFAN), a coalition of more than 140 breastfeeding promotion groups, monitors implementation of the Code worldwide. IBFAN substantiates thousands of violations of the Code in its report, Breaking the Rules 1994. The violations consist of promoting breast milk substitutes to health workers, using labels describing a brand of formula in idealizing terms, or using labels that do not have warnings in the local language. We should familiarize ourselves with the provisions of the International Code and the status of the Code in our country. WABA provides an action folder which contains basic background information on the code and action ideas.

  19. The additional impact of liaison psychiatry on the future funding of general hospital services.

    PubMed

    Udoh, G; Afif, M; MacHale, S

    2012-01-01

    An accurate coding system is fundamental in determining Casemix, which is likely to become a major determinant of future funding of health care services. Our aim was to determine whether the Hospital Inpatient Enquiry (HIPE) system's assigned codes for psychiatric disorders were accurate and reflective of liaison psychiatric input into patients' care. The HIPE system's codes for psychiatric disorders were compared with psychiatrists' coding for the same patients over a prospective 6-month period, using ICD-10 diagnostic criteria. A total of 262 cases were reviewed, of which 135 (51%) were male and 127 (49%) were female. The mean age was 49 years, ranging from 16 years to 87 years (SD 17.3). Our findings show a significant disparity between HIPE and psychiatrists' coding. Only 94 (36%) of the HIPE coded cases were compatible with the psychiatrists' coding. The commonest cause of incompatibility was the coding personnel's failure to code for a psychiatric disorder in the presence of one, 117 (69.9%); others were coding for a different diagnosis, 36 (21%), coding for a psychiatric disorder in the absence of one, 11 (6.6%), and a different sub-type and others, 2 (1.2%), respectively. HIPE data coded depression 30 (11.5%) as the commonest diagnosis and general examination 1 (0.4%) as the least common, but failed to code for dementia, illicit drug use and somatoform disorder despite their being coded for by the psychiatrists. In contrast, the psychiatrists coded delirium 46 (18%) and dementia 1 (0.4%) as the commonest and the least diagnosed disorders respectively. Given the marked increase in case complexity associated with psychiatric co-morbidities, future funding streams are at risk of inadequate payment for services rendered.

  20. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.

  1. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message with distance of at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, unlike other codes, which always need to rewrite y completely when x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.
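
    As a much simpler point of comparison than the RW construction itself, the sketch below shows the single-parity (RAID 4/5 style) special case mentioned in the abstract, where updating one data symbol requires rewriting only that symbol and the parity symbol, and any single erased symbol can be recovered from the rest. It illustrates the read/write trade-off only; it is not the Read-Write-Coding-System.

```python
# Hedged sketch: the single-parity (RAID 4/5 style) special case, not the RWC
# construction. A one-symbol update touches only that symbol and the parity.
import numpy as np

def encode(data):
    """Append an XOR parity symbol to a vector of byte symbols."""
    return np.append(data, np.bitwise_xor.reduce(data))

def update_symbol(codeword, i, new_value):
    """Rewrite only position i and the parity (last) position."""
    updated = codeword.copy()
    updated[-1] ^= updated[i] ^ new_value   # patch parity with the delta
    updated[i] = new_value
    return updated

def recover_erasure(codeword, missing):
    """Recover one erased position as the XOR of all the others."""
    rest = [j for j in range(len(codeword)) if j != missing]
    return np.bitwise_xor.reduce(codeword[rest])

data = np.array([0x11, 0x22, 0x33, 0x44], dtype=np.uint8)
cw = update_symbol(encode(data), 1, 0x99)
print(hex(recover_erasure(cw, 0)))          # 0x11, recovered from the other symbols
```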

  2. Improving accuracy of clinical coding in surgery: collaboration is key.

    PubMed

    Heywood, Nick A; Gill, Michael D; Charlwood, Natasha; Brindle, Rachel; Kirwan, Cliona C

    2016-08-01

    Clinical coding data provide the basis for Hospital Episode Statistics and Healthcare Resource Group codes. High accuracy of this information is required for payment by results, allocation of health and research resources, and public health data and planning. We sought to identify the level of accuracy of clinical coding in general surgical admissions across hospitals in the Northwest of England. Clinical coding departments identified a total of 208 emergency general surgical patients discharged between 1st March and 15th August 2013 from seven hospital trusts (median = 20, range = 16-60). Blinded re-coding was performed by a senior clinical coder and clinician, with results compared with the original coding outcome. Recorded codes were generated from OPCS-4 & ICD-10. Of all cases, 194 of 208 (93.3%) had at least one coding error and 9 of 208 (4.3%) had errors in both primary diagnosis and primary procedure. Errors were found in 64 of 208 (30.8%) of primary diagnoses and 30 of 137 (21.9%) of primary procedure codes. Median tariff using original codes was £1411.50 (range, £409-9138). Re-calculation using updated clinical codes showed a median tariff of £1387.50, P = 0.997 (range, £406-10,102). The most frequent reasons for incorrect coding were "coder error" and a requirement for "clinical interpretation of notes". Errors in clinical coding are multifactorial and have significant impact on primary diagnosis, potentially affecting the accuracy of Hospital Episode Statistics data and in turn the allocation of health care resources and public health planning. As we move toward surgeon specific outcomes, surgeons should increase collaboration with coding departments to ensure the system is robust. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. The impact of three discharge coding methods on the accuracy of diagnostic coding and hospital reimbursement for inpatient medical care.

    PubMed

    Tsopra, Rosy; Peckham, Daniel; Beirne, Paul; Rodger, Kirsty; Callister, Matthew; White, Helen; Jais, Jean-Philippe; Ghosh, Dipansu; Whitaker, Paul; Clifton, Ian J; Wyatt, Jeremy C

    2018-07-01

    Coding of diagnoses is important for patient care, hospital management and research. However, coding accuracy is often poor and may reflect the method of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement. Comparisons of coding inaccuracy were made between a list of coded diagnoses obtained by a coder using (i) the discharge summary alone, (ii) case notes and discharge summary, and (iii) the discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary and secondary diagnoses, the Healthcare Resource Group (HRG) and the estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and coder. 107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when a coder used the discharge summary alone, and decreased significantly when the coder used the case notes (70% vs 58% respectively, p < 0.0001) or coded from the discharge summary with medical support (70% vs 60% respectively, p < 0.0001). When compared with the gold standard, the percentage of incorrect HRGs was 42% for discharge summary alone, 31% for coding with case notes, and 35% for coding with medical support. The three coding methods resulted in an annual estimated loss of hospital remuneration of between £1.8 M and £16.5 M. The accuracy of diagnosis codes and percentage of correct HRGs improved when coders used either case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Astronomy education and the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, Robert J.

    2016-01-01

    The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.

  5. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. This paper examines the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated with illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
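
    As a toy illustration of the first MSC class above, the sketch below builds a small codebook coder (vector quantizer) for a correlated Gaussian source at one bit per sample, i.e. a four-word codebook over two-sample vectors. The source model, codebook training and block length are assumptions for illustration, not the coder designs studied in the paper.

```python
# Hedged sketch: codebook coding (vector quantization) of a first-order
# Gauss-Markov source at 1 bit/sample (4 codewords for 2-sample vectors).
# Plain k-means training is used here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def train_codebook(vectors, n_words=4, iters=20):
    codebook = vectors[rng.choice(len(vectors), n_words, replace=False)].copy()
    for _ in range(iters):
        idx = np.argmin(((vectors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
        for j in range(n_words):
            if np.any(idx == j):
                codebook[j] = vectors[idx == j].mean(axis=0)
    return codebook

# Correlated source, blocked into 2-sample vectors.
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=np.sqrt(1 - 0.81))
vecs = x.reshape(-1, 2)
cb = train_codebook(vecs)
codes = np.argmin(((vecs[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)   # 2 bits/vector
recon = cb[codes].reshape(-1)
snr = 10 * np.log10(np.var(x) / np.mean((x - recon) ** 2))
print(f"rate = 1 bit/sample, reconstruction SNR ~ {snr:.1f} dB")
```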

  6. Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels

    NASA Technical Reports Server (NTRS)

    Moher, Michael L.; Lodge, John H.

    1990-01-01

    A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.

  7. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarization image from a complex background and improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and shows algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
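
    Sauvola's adaptive binarization, mentioned above as the basis of the improved method, thresholds each pixel against a local mean and standard deviation. The sketch below is a minimal generic implementation; the window size and the k and R parameters are common illustrative defaults, not the values or modifications used in the paper.

```python
# Hedged sketch: standard Sauvola adaptive thresholding for binarizing a
# grayscale QR-code image against an uneven background. Parameter values
# are illustrative defaults, not those of the paper's modified method.
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
    """Return a boolean foreground mask for a grayscale image in [0, 255]."""
    img = gray.astype(np.float64)
    mean = uniform_filter(img, size=window)                # local mean m(x, y)
    mean_sq = uniform_filter(img * img, size=window)       # local mean of squares
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))  # local std s(x, y)
    threshold = mean * (1.0 + k * (std / R - 1.0))         # Sauvola threshold
    return img < threshold                                 # dark modules = foreground
```

    A larger window smooths over module-sized detail while a smaller one is more sensitive to noise, so in practice the window is chosen relative to the expected module size of the symbol.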

  8. Separable concatenated codes with iterative map decoding for Rician fading channels

    NASA Technical Reports Server (NTRS)

    Lodge, J. H.; Young, R. J.

    1993-01-01

    Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.

  9. Generating Code Review Documentation for Auto-Generated Mission-Critical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2009-01-01

    Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the autogenerated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.

  10. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  11. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY

    2012-08-28

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  12. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  13. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  14. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  15. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  16. 24 CFR 92.251 - Property standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., as applicable, one of three model codes: Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI); or the Council of American Building Officials (CABO) one or two...) Housing that is constructed or rehabilitated with HOME funds must meet all applicable local codes...

  17. 24 CFR 941.203 - Design and construction standards.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... national building code, such as Uniform Building Code, Council of American Building Officials Code, or Building Officials Conference of America Code; (2) Applicable State and local laws, codes, ordinances, and... intended to serve. Building design and construction shall strive to encourage in residents a proprietary...

  18. 7 CFR 4274.337 - Other regulatory requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... with the seismic provisions of one of the following model building codes or the latest edition of that...) Uniform Building Code; (ii) 1993 Building Officials and Code Administrators International, Inc. (BOCA) National Building Code; or (iii) 1992 Amendments to the Southern Building Code Congress International...

  19. 24 CFR 92.251 - Property standards.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., as applicable, one of three model codes: Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI); or the Council of American Building Officials (CABO) one or two...) Housing that is constructed or rehabilitated with HOME funds must meet all applicable local codes...

  20. 7 CFR 4274.337 - Other regulatory requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... with the seismic provisions of one of the following model building codes or the latest edition of that...) Uniform Building Code; (ii) 1993 Building Officials and Code Administrators International, Inc. (BOCA) National Building Code; or (iii) 1992 Amendments to the Southern Building Code Congress International...

  1. 24 CFR 941.203 - Design and construction standards.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... national building code, such as Uniform Building Code, Council of American Building Officials Code, or Building Officials Conference of America Code; (2) Applicable State and local laws, codes, ordinances, and... intended to serve. Building design and construction shall strive to encourage in residents a proprietary...

  2. 41 CFR 128-1.8005 - Seismic safety standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the model building codes that the Interagency Committee on Seismic Safety in Construction (ICSSC...) Uniform Building Code (UBC); (2) The 1992 Supplement to the Building Officials and Code Administrators International (BOCA) National Building Code (NBC); and (3) The 1992 Amendments to the Southern Building Code...

  3. 38 CFR 39.63 - Architectural design standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 5000, Building Construction and Safety Code, and the 2002 edition of the National Electrical Code, NFPA... 5000, Building Construction and Safety Code. Where the adopted codes state conflicting requirements... the standards set forth in this section, all applicable local and State building codes and regulations...

  4. 7 CFR 4274.337 - Other regulatory requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... with the seismic provisions of one of the following model building codes or the latest edition of that...) Uniform Building Code; (ii) 1993 Building Officials and Code Administrators International, Inc. (BOCA) National Building Code; or (iii) 1992 Amendments to the Southern Building Code Congress International...

  5. 41 CFR 128-1.8005 - Seismic safety standards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the model building codes that the Interagency Committee on Seismic Safety in Construction (ICSSC...) Uniform Building Code (UBC); (2) The 1992 Supplement to the Building Officials and Code Administrators International (BOCA) National Building Code (NBC); and (3) The 1992 Amendments to the Southern Building Code...

  6. 38 CFR 39.63 - Architectural design standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 5000, Building Construction and Safety Code, and the 2002 edition of the National Electrical Code, NFPA... 5000, Building Construction and Safety Code. Where the adopted codes state conflicting requirements... the standards set forth in this section, all applicable local and State building codes and regulations...

  7. 38 CFR 39.63 - Architectural design standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 5000, Building Construction and Safety Code, and the 2002 edition of the National Electrical Code, NFPA... 5000, Building Construction and Safety Code. Where the adopted codes state conflicting requirements... the standards set forth in this section, all applicable local and State building codes and regulations...

  8. 41 CFR 128-1.8005 - Seismic safety standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the model building codes that the Interagency Committee on Seismic Safety in Construction (ICSSC...) Uniform Building Code (UBC); (2) The 1992 Supplement to the Building Officials and Code Administrators International (BOCA) National Building Code (NBC); and (3) The 1992 Amendments to the Southern Building Code...

  9. 41 CFR 128-1.8005 - Seismic safety standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the model building codes that the Interagency Committee on Seismic Safety in Construction (ICSSC...) Uniform Building Code (UBC); (2) The 1992 Supplement to the Building Officials and Code Administrators International (BOCA) National Building Code (NBC); and (3) The 1992 Amendments to the Southern Building Code...

  10. 24 CFR 92.251 - Property standards.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., as applicable, one of three model codes: Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI); or the Council of American Building Officials (CABO) one or two...) Housing that is constructed or rehabilitated with HOME funds must meet all applicable local codes...

  11. 24 CFR 92.251 - Property standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., as applicable, one of three model codes: Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI); or the Council of American Building Officials (CABO) one or two...) Housing that is constructed or rehabilitated with HOME funds must meet all applicable local codes...

  12. 77 FR 12764 - POSTNET Barcode Discontinuation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... routing code appears in the lower right corner. * * * * * [Delete current 5.6, DPBC Numeric Equivalent, in... correct ZIP Code, ZIP+4 code, or numeric equivalent to the delivery point routing code and which meets... equivalent to the delivery point routing code is formed by...

  13. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m_1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
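
    The following back-of-the-envelope sketch shows the flavour of such a computation under strong simplifying assumptions (every inner-block decoding failure corrupts exactly one outer symbol, errors only, no erasure declarations); the paper's analysis of errors plus erasures with interleaving is more refined, and the parameters chosen here are illustrative only.

```python
# Hedged sketch: crude reliability estimate for a cascaded scheme, assuming an
# inner block code correcting t1 of its n1 bits and an outer code correcting
# t2 of its n2 symbols, with i.i.d. bit errors of probability p.
from math import comb

def block_error_prob(n, t, p):
    """P(more than t errors in a block of n), errors i.i.d. with probability p."""
    ok = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
    return 1.0 - ok

def cascaded_failure_prob(n1, t1, n2, t2, p):
    symbol_error = block_error_prob(n1, t1, p)     # outer-symbol error probability
    return block_error_prob(n2, t2, symbol_error)  # outer codeword failure

# Illustrative numbers: a 2-error-correcting length-63 inner code feeding a
# (255, 239) Reed-Solomon outer code (t2 = 8) at a raw bit-error rate of 1e-2.
print(f"{cascaded_failure_prob(63, 2, 255, 8, 1e-2):.3e}")
```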

  14. Bar code, good for industry and trade--how does it benefit the dentist?

    PubMed

    Oehlmann, H

    2001-10-01

    Every dentist who attentively follows the change in product labelling can easily see that the HIBC bar code is on the increase. In fact, according to information from FIDE/VDDI and ADE/BVD, the dental industry and trade are firmly resolved to apply the HIBC bar code to all products used internationally in dental practices. Why? Indeed, at first it looks like extra expense to additionally print a bar code on the packages. Good reasons can only lie in advantages which manufacturers and the trade expect from the HIBC bar code. Indications in dental technician circles are that the HIBC bar code is coming. If there are advantages, what are these, and can the dentist also profit from them? What does HIBC bar code mean and what items of interest does it include? What does bar code cost and does only one code exist? This is explained briefly, concentrating on the benefits bar code can bring for different users.

  15. Normative lessons: codes of conduct, self-regulation and the law.

    PubMed

    Parker, Malcolm H

    2010-06-07

    Good medical practice: a code of conduct for doctors in Australia provides uniform standards to be applied in relation to complaints about doctors to the new Medical Board of Australia. The draft Code was criticised for being prescriptive. The final Code employs apparently less authoritative wording than the draft Code, but the implicit obligations it contains are no less prescriptive. Although the draft Code was thought to potentially undermine trust in doctors, and stifle professional judgement in relation to individual patients, its general obligations always allowed for flexibility of application, depending on the circumstances of individual patients. Professional codes may contain some aspirational statements, but they always contain authoritative ones, and they share this feature with legal codes. In successfully diluting the apparent prescriptivity of the draft Code, the profession has lost an opportunity to demonstrate its commitment to the raison d'etre of self-regulation - the protection of patients. Professional codes are not opportunities for reflection, consideration and debate, but are outcomes of these activities.

  16. Ancient DNA sequence revealed by error-correcting codes.

    PubMed

    Brandão, Marcelo M; Spoladore, Larissa; Faria, Luzinete C B; Rocha, Andréa S L; Silva-Filho, Marcio C; Palazzo, Reginaldo

    2015-07-10

    A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code.

  17. Ancient DNA sequence revealed by error-correcting codes

    PubMed Central

    Brandão, Marcelo M.; Spoladore, Larissa; Faria, Luzinete C. B.; Rocha, Andréa S. L.; Silva-Filho, Marcio C.; Palazzo, Reginaldo

    2015-01-01

    A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code. PMID:26159228

  18. Correlation approach to identify coding regions in DNA sequences

    NASA Technical Reports Server (NTRS)

    Ossadnik, S. M.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1994-01-01

    Recently, it was observed that noncoding regions of DNA sequences possess long-range power-law correlations, whereas coding regions typically display only short-range correlations. We develop an algorithm based on this finding that enables investigators to perform a statistical analysis on long DNA sequences to locate possible coding regions. The algorithm is particularly successful in predicting the location of lengthy coding regions. For example, for the complete genome of yeast chromosome III (315,344 nucleotides), at least 82% of the predictions correspond to putative coding regions; the algorithm correctly identified all coding regions larger than 3000 nucleotides, 92% of coding regions between 2000 and 3000 nucleotides long, and 79% of coding regions between 1000 and 2000 nucleotides. The predictive ability of this new algorithm supports the claim that there is a fundamental difference in the correlation property between coding and noncoding sequences. This algorithm, which is not species-dependent, can be implemented with other techniques for rapidly and accurately locating relatively long coding regions in genomic sequences.
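
    The abstract does not spell out the algorithm itself; one standard way to quantify the long-range power-law correlations it relies on is detrended fluctuation analysis (DFA) of the purine/pyrimidine random walk, sketched below as an assumed stand-in rather than the authors' code.

```python
# Hedged sketch: detrended fluctuation analysis (DFA) of the purine/pyrimidine
# walk of a DNA sequence; the exponent alpha is ~0.5 for uncorrelated sequences
# and >0.5 in the presence of long-range correlations.
import numpy as np

def dfa_exponent(seq, scales=(16, 32, 64, 128, 256)):
    steps = np.array([1 if b in "AG" else -1 for b in seq.upper()])  # purine = +1
    walk = np.cumsum(steps - steps.mean())
    fluct = []
    for n in scales:
        rms = []
        for i in range(len(walk) // n):
            window = walk[i * n:(i + 1) * n]
            x = np.arange(n)
            trend = np.polyval(np.polyfit(x, window, 1), x)   # local linear detrend
            rms.append(np.sqrt(np.mean((window - trend) ** 2)))
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]    # slope of log F(n) vs log n

random_seq = "".join(np.random.default_rng(0).choice(list("ACGT"), 5000))
print(round(dfa_exponent(random_seq), 2))   # close to 0.5 for an uncorrelated sequence
```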

  19. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    PubMed

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression related to the presence of their co-existing conditions, death status and the number of diagnosis codes in the hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity is minimal. The coding validity of a condition is closely related to its clinical importance and the complexity of patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.

  20. An analysis of the metabolic theory of the origin of the genetic code

    NASA Technical Reports Server (NTRS)

    Amirnovin, R.; Bada, J. L. (Principal Investigator)

    1997-01-01

    A computer program was used to test Wong's coevolution theory of the genetic code. The codon correlations between the codons of biosynthetically related amino acids in the universal genetic code and in randomly generated genetic codes were compared. It was determined that many codon correlations are also present within random genetic codes and that among the random codes there are always several which have many more correlations than that found in the universal code. Although the number of correlations depends on the choice of biosynthetically related amino acids, the probability of choosing a random genetic code with the same or greater number of codon correlations as the universal genetic code was found to vary from 0.1% to 34% (with respect to a fairly complete listing of related amino acids). Thus, Wong's theory that the genetic code arose by coevolution with the biosynthetic pathways of amino acids, based on codon correlations between biosynthetically related amino acids, is statistical in nature.

  1. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  2. Construction of a new regular LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng

    2013-05-01

    A novel construction method of the check matrix for the regular low density parity check (LDPC) code is proposed. The novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with the code rate of 93.7% and the redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can respectively be improved by about 1.93 dB and 0.98 dB at the bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in ITU-T G.975 recommendation and the LDPC(32640,30592) code in ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed novel regular SCG-LDPC(3969,3720) code has excellent performance, and is more suitable for high-speed long-haul optical transmission systems.

  3. Disclosure of terminal illness to patients and families: diversity of governing codes in 14 Islamic countries.

    PubMed

    Abdulhameed, Hunida E; Hammami, Muhammad M; Mohamed, Elbushra A Hameed

    2011-08-01

    The consistency of codes governing disclosure of terminal illness to patients and families in Islamic countries has not been studied until now. To review available codes on disclosure of terminal illness in Islamic countries. DATA SOURCE AND EXTRACTION: Data were extracted through searches on Google and PubMed. Codes related to disclosure of terminal illness to patients or families were abstracted, and then classified independently by the three authors. Codes for 14 Islamic countries were located. Five codes were silent regarding informing the patient, seven allowed concealment, one mandated disclosure and one prohibited disclosure. Five codes were silent regarding informing the family, four allowed disclosure and five mandated/recommended disclosure. The Islamic Organization for Medical Sciences code was silent on both issues. Codes regarding disclosure of terminal illness to patients and families differed markedly among Islamic countries. They were silent in one-third of the codes, and tended to favour a paternalistic/utilitarian, family-centred approach over an autonomous, patient-centred approach.

  4. Nonuniform code concatenation for universal fault-tolerant quantum computing

    NASA Astrophysics Data System (ADS)

    Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza

    2017-09-01

    Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.

  5. Professional codes in a changing nursing context: literature review.

    PubMed

    Meulenbergs, Tom; Verpeet, Ellen; Schotsmans, Paul; Gastmans, Chris

    2004-05-01

    Professional codes played a definitive role during a specific period of time, when the professional context of nursing was characterized by an increasing professionalization. Today, however, this professional context has changed. This paper reports on a study which aimed to explore the meaning of professional codes in the current context of the nursing profession. A literature review on professional codes and the nursing profession was carried out. The literature was systematically investigated using the electronic databases PubMed and The Philosopher's Index, and the keywords nursing codes, professional codes in nursing, ethics codes/ethical codes, professional ethics. Due to the nursing profession's growing multidisciplinary nature, the increasing dominance of economic discourse, and the intensified legal framework in which health care professionals need to operate, the context of nursing is changing. In this changed professional context, nursing professional codes have to accommodate to the increasing ethical demands placed upon the profession. Therefore, an ethicization of these codes is desirable, and their moral objectives need to be revalued.

  6. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  7. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.

  8. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
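
    As a hedged illustration of how an average below 1 coded bit per source symbol is possible (not necessarily the scheme described above), simple run-length coding of a mostly-zero binary source already beats symbol-by-symbol Huffman coding:

    ```python
    # Hedged example: run-length coding of a low-entropy (mostly-zero)
    # binary source. Fixed-width run lengths are an assumption made for
    # simplicity, not the encoding described in the record above.

    def run_length_encode(bits, counter_width=8):
        """Encode the lengths of zero-runs (each terminated by a one) as
        fixed-width integers. Assumes every run is shorter than 2**counter_width."""
        runs, run = [], 0
        for b in bits:
            if b == 0:
                run += 1
            else:
                runs.append(run)   # a run of zeros ended by a one
                run = 0
        runs.append(run)           # trailing run of zeros
        return runs

    source = [0] * 199 + [1] + [0] * 200   # 400 binary symbols, almost all zero
    runs = run_length_encode(source)
    coded_bits = len(runs) * 8             # two 8-bit run lengths = 16 bits
    print(coded_bits / len(source))        # 0.04 coded bits per source symbol
    ```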

  9. A Working Model for the System Alumina-Magnesia.

    DTIC Science & Technology

    1983-05-01

    Several regions in the resulting diagram appear rather uncertain: the liquidus ... (The remainder of the indexed snippet consists of citation and distribution-list fragments and is not recoverable.)

  10. Iterative demodulation and decoding of coded non-square QAM

    NASA Technical Reports Server (NTRS)

    Li, L.; Divsalar, D.; Dolinar, S.

    2003-01-01

    Simulation results show that, with iterative demodulation and decoding, coded NS-8QAM performs 0.5 dB better than standard 8QAM and 0.7 dB better than 8PSK at BER = 10^-5, when the FEC code is the (15, 11) Hamming code concatenated with a rate-1 accumulator code, while coded NS-32QAM performs 0.25 dB better than standard 32QAM.
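
    The rate-1 accumulator mentioned here is simply a running modulo-2 sum (transfer function 1/(1+D)). A minimal sketch of that building block follows; the (15, 11) Hamming encoder, interleaving, and the iterative demapper/decoder are deliberately omitted.

    ```python
    # Rate-1 accumulator code: each output bit is the XOR of the current
    # input bit with the previous output bit (y[i] = y[i-1] XOR x[i]).
    # Sketch only; the rest of the concatenated scheme is not shown.

    def accumulate(bits):
        out, prev = [], 0
        for b in bits:
            prev ^= b
            out.append(prev)
        return out

    print(accumulate([1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 1]
    ```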

  11. High-Content Optical Codes for Protecting Rapid Diagnostic Tests from Counterfeiting.

    PubMed

    Gökçe, Onur; Mercandetti, Cristina; Delamarche, Emmanuel

    2018-06-19

    Warnings and reports on counterfeit diagnostic devices are released several times a year by regulators and public health agencies. Unfortunately, mishandling, altering, and counterfeiting point-of-care diagnostics (POCDs) and rapid diagnostic tests (RDTs) are lucrative, relatively simple, and can lead to devastating consequences. Here, we demonstrate how to implement optical security codes in silicon- and nitrocellulose-based flow paths for device authentication using a smartphone. The codes are created by inkjet spotting inks directly on nitrocellulose or on micropillars. Codes containing up to 32 elements per mm² and 8 colors can encode as many as 10^45 combinations. Codes on silicon micropillars can be erased by setting a continuous flow path across the entire array of code elements, or, for nitrocellulose, by simply wicking a liquid across the code. Static or labile code elements can further be formed on nitrocellulose to create a hidden code using poly(ethylene glycol) (PEG) or glycerol additives to the inks. More advanced codes having a specific deletion sequence can also be created in silicon microfluidic devices using an array of passive routing nodes, which activate in a particular, programmable sequence. Such codes are simple to fabricate, easy to view, and efficient in coding information; they can ideally be used in combination with information on a package to protect diagnostic devices from counterfeiting.

  12. A Large Scale Code Resolution Service Network in the Internet of Things

    PubMed Central

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to immediately obtain the information resources associated with a particular product code. In large-scale application scenarios, a code resolution service faces serious issues involving heterogeneity, big data, and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network meets the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  13. ClinicalCodes: an online clinical codes repository to improve the validity and reproducibility of research using electronic medical records.

    PubMed

    Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David

    2014-01-01

    Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.

  14. A Catalogue of Putative cis-Regulatory Interactions Between Long Non-coding RNAs and Proximal Coding Genes Based on Correlative Analysis Across Diverse Human Tumors.

    PubMed

    Basu, Swaraj; Larsson, Erik

    2018-05-31

    Antisense transcripts and other long non-coding RNAs are pervasive in mammalian cells, and some of these molecules have been proposed to regulate proximal protein-coding genes in cis. For example, non-coding transcription can contribute to inactivation of tumor suppressor genes in cancer, and antisense transcripts have been implicated in the epigenetic inactivation of imprinted genes. However, our knowledge is still limited and more such regulatory interactions likely await discovery. Here, we make use of available gene expression data from a large compendium of human tumors to generate hypotheses regarding non-coding-to-coding cis-regulatory relationships, with emphasis on negative associations, as these are less likely to arise for reasons other than cis-regulation. We document a large number of possible regulatory interactions, including 193 coding/non-coding pairs that show expression patterns compatible with negative cis-regulation. Importantly, by this approach we capture several known cases, and many of the involved coding genes have known roles in cancer. Our study provides a large catalog of putative non-coding/coding cis-regulatory pairs that may serve as a basis for further experimental validation and characterization. Copyright © 2018 Basu and Larsson.

  15. Clinical coding of prospectively identified paediatric adverse drug reactions--a retrospective review of patient records.

    PubMed

    Bellis, Jennifer R; Kirkham, Jamie J; Nunn, Anthony J; Pirmohamed, Munir

    2014-12-17

    National Health Service (NHS) hospitals in the UK use a system of coding for patient episodes. The coding system used is the International Classification of Disease (ICD-10). There are ICD-10 codes which may be associated with adverse drug reactions (ADRs) and there is a possibility of using these codes for ADR surveillance. This study aimed to determine whether ADRs prospectively identified in children admitted to a paediatric hospital were coded appropriately using ICD-10. The electronic admission abstract for each patient with at least one ADR was reviewed. A record was made of whether the ADR(s) had been coded using ICD-10. Of 241 ADRs, 76 (31.5%) were coded using at least one ICD-10 ADR code. Of the oncology ADRs, 70/115 (61%) were coded using an ICD-10 ADR code compared with 6/126 (4.8%) non-oncology ADRs (difference in proportions 56%, 95% CI 46.2% to 65.8%; p < 0.001). The majority of ADRs detected in a prospective study at a paediatric centre would not have been identified if the study had relied on ICD-10 codes as a single means of detection. Data derived from administrative healthcare databases are not reliable for identifying ADRs by themselves, but may complement other methods of detection.

  16. A large scale code resolution service network in the Internet of Things.

    PubMed

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to immediately obtain the information resources associated with a particular product code. In large-scale application scenarios, a code resolution service faces serious issues involving heterogeneity, big data, and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network meets the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  17. Advanced coding and modulation schemes for TDRSS

    NASA Technical Reports Server (NTRS)

    Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan

    1993-01-01

    This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking and Data Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10^-5 Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the Eb/N0 degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.

  18. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High-rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high-rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth-efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.

  19. ClinicalCodes: An Online Clinical Codes Repository to Improve the Validity and Reproducibility of Research Using Electronic Medical Records

    PubMed Central

    Springate, David A.; Kontopantelis, Evangelos; Ashcroft, Darren M.; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David

    2014-01-01

    Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects. PMID:24941260

  20. Improving the accuracy of operation coding in surgical discharge summaries

    PubMed Central

    Martinou, Eirini; Shouls, Genevieve; Betambeau, Nadine

    2014-01-01

    Procedural coding in surgical discharge summaries is extremely important; as well as communicating to healthcare staff which procedures have been performed, it also provides information that is used by the hospital's coding department. The OPCS code (Office of Population, Censuses and Surveys Classification of Surgical Operations and Procedures) is used to generate the tariff that allows the hospital to be reimbursed for the procedure. We felt that the OPCS coding on discharge summaries was often incorrect within our breast and endocrine surgery department. A baseline measurement over two months demonstrated that 32% of operations had been incorrectly coded, resulting in an incorrect tariff being applied and an estimated loss to the Trust of £17,000. We developed a simple but specific OPCS coding table in collaboration with the clinical coding team and breast surgeons that summarised all operations performed within our department. This table was disseminated across the team, specifically to the junior doctors who most frequently complete the discharge summaries. Re-audit showed 100% of operations were accurately coded, demonstrating the effectiveness of the coding table. We suggest that specifically designed coding tables be introduced across each surgical department to ensure accurate OPCS codes are used to produce better quality surgical discharge summaries and to ensure correct reimbursement to the Trust. PMID:26734286

  1. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    PubMed

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
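
    The following is a hedged sketch of the score-adjustment idea (not the authors' exact algorithm): the auto-coder's confidence for each code is blended with the co-occurrence support it receives from the other currently likely codes, over a few iterations. The codes, scores, and pairwise probabilities below are illustrative placeholders.

    ```python
    # Hedged sketch of iterative confidence adjustment using co-occurrence
    # propensities. cooccur[(a, b)] approximates P(code a assigned | code b
    # assigned); all values and code identifiers are illustrative only.

    def adjust_scores(scores, cooccur, weight=0.5, rounds=3, default=0.1):
        """Blend each code's score with the co-occurrence support it receives
        from the other codes whose current score is at least 0.5."""
        scores = dict(scores)
        for _ in range(rounds):
            likely = {c for c, s in scores.items() if s >= 0.5}
            updated = {}
            for code, s in scores.items():
                others = likely - {code}
                if others:
                    support = sum(cooccur.get((code, o), default) for o in others) / len(others)
                    updated[code] = (1 - weight) * s + weight * support
                else:
                    updated[code] = s
            scores = updated
        return scores

    scores = {"0DTJ4ZZ": 0.70, "0DTJ0ZZ": 0.65, "0DB98ZX": 0.55}
    cooccur = {("0DTJ4ZZ", "0DB98ZX"): 0.6, ("0DB98ZX", "0DTJ4ZZ"): 0.5,
               ("0DTJ0ZZ", "0DTJ4ZZ"): 0.05, ("0DTJ4ZZ", "0DTJ0ZZ"): 0.05}
    print(adjust_scores(scores, cooccur))  # mutually exclusive pair loses confidence
    ```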

  2. Residus de 2-formes differentielles sur les surfaces algebriques et applications aux codes correcteurs d'erreurs

    NASA Astrophysics Data System (ADS)

    Couvreur, A.

    2009-05-01

    The theory of algebraic-geometric codes was developed at the beginning of the 1980s following a paper by V. D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first, called "functional", uses rational functions on X; the second, called "differential", involves rational 1-forms on the curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction to algebraic varieties of arbitrary dimension was given by Y. Manin in an article of 1984. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. We then study the properties of these codes and particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears: whereas on a curve a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes that are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that answers to some open problems "à la Bertini" could give very interesting information on the parameters of these codes.
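
    For reference, the two classical constructions on a curve, whose duality is the property that fails to carry over to surfaces, can be written as follows. These are the standard textbook definitions, not the thesis's new surface construction.

    ```latex
    % Standard algebraic-geometry codes on a curve X over F_q, with
    % D = P_1 + ... + P_n a sum of distinct rational points and G a divisor
    % whose support is disjoint from D (background only).
    \[
      C_L(D, G) = \bigl\{ (f(P_1), \dots, f(P_n)) : f \in L(G) \bigr\},
      \qquad
      C_\Omega(D, G) = \bigl\{ (\mathrm{res}_{P_1}(\omega), \dots, \mathrm{res}_{P_n}(\omega)) : \omega \in \Omega(G - D) \bigr\},
    \]
    \[
      C_\Omega(D, G) = C_L(D, G)^{\perp}
      \quad \text{(on curves; this duality can fail for surfaces).}
    \]
    ```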

  3. Identifying Pediatric Severe Sepsis and Septic Shock: Accuracy of Diagnosis Codes.

    PubMed

    Balamuth, Fran; Weiss, Scott L; Hall, Matt; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Centkowski, Sierra; Baumer-Mouradian, Shannon; Weiser, Jason; Hayes, Katie; Shah, Samir S; Alpern, Elizabeth R

    2015-12-01

    To evaluate the accuracy of 2 established administrative methods of identifying children with sepsis using a medical record review reference standard. Multicenter retrospective study at 6 US children's hospitals. Subjects were children >60 days to <19 years of age and were identified in 4 groups based on International Classification of Diseases, Ninth Revision, Clinical Modification codes: (1) severe sepsis/septic shock (sepsis codes); (2) infection plus organ dysfunction (combination codes); (3) subjects without codes for infection, organ dysfunction, or severe sepsis; and (4) infection but not severe sepsis or organ dysfunction. Combination codes were allowed, but not required, within the sepsis codes group. We determined the presence of reference standard severe sepsis according to consensus criteria. Logistic regression was performed to determine whether addition of codes for sepsis therapies improved case identification. A total of 130 out of 432 subjects met the reference standard for severe sepsis. Sepsis codes had sensitivity 73% (95% CI 70-86), specificity 92% (95% CI 87-95), and positive predictive value 79% (95% CI 70-86). Combination codes had sensitivity 15% (95% CI 9-22), specificity 71% (95% CI 65-76), and positive predictive value 18% (95% CI 11-27). Slight improvements in model characteristics were observed when codes for vasoactive medications and endotracheal intubation were added to sepsis codes (c-statistic 0.83 vs 0.87, P = .008). Sepsis-specific International Classification of Diseases, Ninth Revision, Clinical Modification codes identify pediatric patients with severe sepsis in administrative data more accurately than a combination of codes for infection plus organ dysfunction. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?

    PubMed

    Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D

    2016-01-01

    The objective of this study was to examine the impact of the transition from International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes for which definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and for these additional resources need to be invested to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.
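
    The following is one possible operationalization of the simple/convoluted distinction, sketched in Python; it is not the authors' network analysis, and the mapping entries are invented toy data rather than entries from the GEM files.

    ```python
    # Hedged sketch: classify an ICD-9-CM code as "simple" when every
    # ICD-10-CM target it maps to maps back only to that ICD-9-CM code,
    # and "convoluted" when the forward and backward maps entangle several
    # codes. Toy data; real analyses use the full bidirectional GEMs.

    def classify(icd9, forward, backward):
        targets = forward.get(icd9, [])
        if not targets:
            return "no mapping"
        for t in targets:
            if backward.get(t, []) != [icd9]:
                return "convoluted"
        return "simple"

    forward  = {"401.1": ["I10"], "250.00": ["E11.9"], "250.01": ["E10.9", "E11.9"]}
    backward = {"I10": ["401.1"], "E11.9": ["250.00", "250.01"], "E10.9": ["250.01"]}
    for code in ["401.1", "250.00", "250.01", "V70.0"]:
        print(code, classify(code, forward, backward))
    # 401.1 simple / 250.00 convoluted / 250.01 convoluted / V70.0 no mapping
    ```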

  5. Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?

    PubMed Central

    Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.

    2017-01-01

    Objectives The objective of this study was to examine the impact of the transition from International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted, and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. PMID:26769875

  6. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... Plumbing Code, 1993 Edition, and the BOCA National Mechanical Code, 1993 Edition, excluding Chapter I, Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood...

  7. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... Plumbing Code, 1993 Edition, and the BOCA National Mechanical Code, 1993 Edition, excluding Chapter I, Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood...

  8. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  9. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  10. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  11. 24 CFR 578.75 - General operations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... assistance under this part must meet State or local building codes, and in the absence of State or local building codes, the International Residential Code or International Building Code (as applicable to the type of structure) of the International Code Council. (2) Services provided with assistance under this...

  12. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  13. 24 CFR 578.75 - General operations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... assistance under this part must meet State or local building codes, and in the absence of State or local building codes, the International Residential Code or International Building Code (as applicable to the type of structure) of the International Code Council. (2) Services provided with assistance under this...

  14. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  15. Reimbursement Policies for Carotid Duplex Ultrasound that are Based on International Classification of Diseases Codes May Discourage Testing in High-Yield Groups.

    PubMed

    Go, Michael R; Masterson, Loren; Veerman, Brent; Satiani, Bhagwan

    2016-02-01

    To curb increasing volumes of diagnostic imaging and costs, reimbursement for carotid duplex ultrasound (CDU) is dependent on "appropriate" indications as documented by International Classification of Diseases (ICD) codes entered by ordering physicians. Historically, asymptomatic indications for CDU yield lower rates of abnormal results than symptomatic indications, and consensus documents agree that most asymptomatic indications for CDU are inappropriate. In our vascular laboratory, we perceived an increased rate of incorrect or inappropriate ICD codes. We therefore sought to determine if ICD codes were useful in predicting the frequency of abnormal CDU. We hypothesized that asymptomatic or nonspecific ICD codes would yield a lower rate of abnormal CDU than symptomatic codes, validating efforts to limit reimbursement in asymptomatic, low-yield groups. We reviewed all outpatient CDU done in 2011 at our institution. ICD codes were recorded, and each medical record was then reviewed by a vascular surgeon to determine if the assigned ICD code appropriately reflected the clinical scenario. CDU findings categorized as abnormal (>50% stenosis) or normal (<50% stenosis) were recorded. Each individual ICD code and group 1 (asymptomatic), group 2 (nonhemispheric symptoms), group 3 (hemispheric symptoms), group 4 (preoperative cardiovascular examination), and group 5 (nonspecific) ICD codes were analyzed for correlation with CDU results. Nine hundred ninety-four patients had 74 primary ICD codes listed as indications for CDU. Of assigned ICD codes, 17.4% were deemed inaccurate. Overall, 14.8% of CDU were abnormal. Of the 13 highest frequency ICD codes, only 433.10, an asymptomatic code, was associated with abnormal CDU. Four symptomatic codes were associated with normal CDU; none of the other high frequency codes were associated with CDU result. Patients in group 1 (asymptomatic) were significantly more likely to have an abnormal CDU compared to each of the other groups (P < 0.001, P < 0.001, P = 0.020, P = 0.002) and to all other groups combined (P < 0.001). Asymptomatic indications by ICD codes yielded higher rates of abnormal CDU than symptomatic indications. This finding is inconsistent with clinical experience and historical data, and we suggest that inaccurate coding may play a role. Limiting reimbursement for CDU in low-yield groups is reasonable. However, reimbursement policies based on ICD coding, for example, limiting payment for asymptomatic ICD codes, may impede use of CDU in high-yield patient groups. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. A Dual Coding View of Vocabulary Learning

    ERIC Educational Resources Information Center

    Sadoski, Mark

    2005-01-01

    A theoretical perspective on acquiring sight vocabulary and developing meaningful vocabulary is presented. Dual Coding Theory assumes that cognition occurs in two independent but connected codes: a verbal code for language and a nonverbal code for mental imagery. The mixed research literature on using pictures in teaching sight vocabulary is…

  17. Contrasting Five Different Theories of Letter Position Coding: Evidence from Orthographic Similarity Effects

    ERIC Educational Resources Information Center

    Davis, Colin J.; Bowers, Jeffrey S.

    2006-01-01

    Five theories of how letter position is coded are contrasted: position-specific slot-coding, Wickelcoding, open-bigram coding (discrete and continuous), and spatial coding. These theories make different predictions regarding the relative similarity of three different types of pairs of letter strings: substitution neighbors,…

  18. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  19. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  20. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  1. Alternative Fuels Data Center: Biodiesel Codes, Standards, and Safety

    Science.gov Websites


  2. There is no MacWilliams identity for convolutional codes [transmission gain comparison]

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
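
    For reference, the block-code identity that fails to carry over can be stated as follows (the standard binary MacWilliams identity; notation is generic and not taken from the note itself).

    ```latex
    % For a binary linear [n, k] code C with dual code C^\perp and weight
    % enumerator W_C(x, y) = \sum_{c \in C} x^{n - \mathrm{wt}(c)} y^{\mathrm{wt}(c)},
    % MacWilliams' identity gives
    \[
      W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_C(x + y,\; x - y).
    \]
    % The note exhibits two convolutional codes with equal transmission gain
    % whose duals differ, so no analogous identity can relate a convolutional
    % code's gain to that of its dual.
    ```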

  3. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  4. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  5. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  6. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  7. 45 CFR 162.1011 - Valid code sets.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...

  8. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...

  9. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...

  10. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...

  11. Code Mixing and Modernization across Cultures.

    ERIC Educational Resources Information Center

    Kamwangamalu, Nkonko M.

    A review of recent studies addressed the functional uses of code mixing across cultures. Expressions of code mixing (CM) are not random; in fact, a number of functions of code mixing can easily be delineated, for example, the concept of "modernization." "Modernization" is viewed with respect to how bilingual code mixers perceive…

  12. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1984-01-01

    Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.

  13. Theoretical Thermal Evaluation of Energy Recovery Incinerators

    DTIC Science & Technology

    1985-12-01


  14. Modeling and Simulation of a Non-Coherent Frequency Shift Keying Transceiver Using a Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2008-09-01

    Convolutional codes are most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n and the constraint length κ; here a convolutional code with r = 1/2 and κ = 3, with generators [7 5], is used (Figure 2: convolutional encoder block diagram for code rate r = 1/2).
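
    A hedged sketch of that rate-1/2, constraint-length-3 encoder with octal generators [7 5] (g0 = 111, g1 = 101) follows; it illustrates the encoding rule only and is not the report's FPGA implementation.

    ```python
    # Rate-1/2, K=3 convolutional encoder with generators 7 and 5 (octal):
    # each input bit produces two output bits formed from the current bit
    # and the two previous input bits.

    def conv_encode(bits, g0=0b111, g1=0b101, k=3):
        state = 0                                   # last k-1 input bits
        out = []
        for b in bits:
            reg = (b << (k - 1)) | state            # newest bit in the high position
            out.append(bin(reg & g0).count("1") % 2)  # output from generator 7
            out.append(bin(reg & g1).count("1") % 2)  # output from generator 5
            state = reg >> 1                        # shift for the next input bit
        return out

    print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
    ```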

  15. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    Highlights: end-to-end MATLAB packet simulation platform; low-density parity-check code (LDPCC); field trials with Silvus DSP MIMO testbed; high mobility ... incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also ... System parameters: channel coding, binary convolutional code or LDPC; packet length, 0 to 2^16 - 1 bytes; coding rate, 1/2, 2/3, 3/4, or 5/6; MIMO channel training length, 0 to 4 symbols.

  16. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    The performance of bandwidth-efficient trellis codes on channels with phase jitter, or those disturbed by jamming and impulse noise, is analyzed. A heuristic algorithm for the construction of bandwidth-efficient trellis codes with any constraint length up to about 30, any signal constellation, and any code rate was developed. Construction of good distance profile trellis codes for sequential decoding and comparison of random coding bounds of trellis-coded modulation schemes are also discussed.

  17. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  18. Survey of adaptive image coding techniques

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1977-01-01

    The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
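
    As a hedged illustration of one of the adaptive predictive techniques named above, the following sketches adaptive delta modulation: a 1-bit quantizer whose step size grows when successive output bits agree (slope overload) and shrinks when they alternate (granular noise). The adaptation factors are illustrative, not taken from the survey.

    ```python
    # Hedged sketch of adaptive delta modulation (illustrative parameters).

    def adaptive_delta_modulate(samples, step=1.0, grow=1.5, shrink=0.66):
        bits, estimate, prev_bit = [], 0.0, None
        for x in samples:
            bit = 1 if x >= estimate else 0
            if prev_bit is not None:
                step *= grow if bit == prev_bit else shrink
            estimate += step if bit == 1 else -step
            bits.append(bit)
            prev_bit = bit
        return bits

    ramp = [0, 2, 5, 9, 14, 14, 13, 12]   # a rising then flattening signal
    print(adaptive_delta_modulate(ramp))  # 1-bit stream tracking the input
    ```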

  19. Current and anticipated uses of thermal-hydraulic codes in NFI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsuda, K.; Takayasu, M.

    1997-07-01

    This paper presents the thermal-hydraulic codes currently used in NFI for LWR fuel development and licensing applications, including transient and design basis accident analyses of LWR plants. The current status of the codes is described in terms of code capability, modeling features, and experience with code application related to fuel development and licensing. Finally, the anticipated use of future thermal-hydraulic codes in NFI is briefly discussed.

  20. Performance analysis of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.; Kasami, T.

    1983-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
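
    For background (a general expression, not the report's specific derivation or bound), the undetected-error probability of an (n, k) linear block code used purely for error detection on a binary symmetric channel can be written from its weight distribution:

    ```latex
    % For an (n, k) linear block code with weight distribution A_1, ..., A_n
    % on a binary symmetric channel with crossover probability eps, an error
    % goes undetected exactly when the error pattern is a nonzero codeword:
    \[
      P_{ud}(\varepsilon) \;=\; \sum_{i=1}^{n} A_i\, \varepsilon^{i} (1-\varepsilon)^{\,n-i}.
    \]
    % A retransmission is requested whenever the outer code detects errors,
    % i.e. whenever the received pattern fails this codeword test.
    ```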
