Sample records for x codes

  1. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  2. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    NASA Astrophysics Data System (ADS)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB (PSNR) lower than that of the phase coded aperture reconstructions.

  3. Coded mask telescopes for X-ray astronomy

    NASA Astrophysics Data System (ADS)

    Skinner, G. K.; Ponman, T. J.

    1987-04-01

    The principles of the coded mask technique are discussed together with the methods of image reconstruction. The coded mask telescopes built at the University of Birmingham, including the SL 1501 coded mask X-ray telescope flown on the Skylark rocket and the Coded Mask Imaging Spectrometer (COMIS) projected for the Soviet space station Mir, are described. A diagram of a coded mask telescope and some designs for coded masks are included.
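The correlation method of image reconstruction used by coded mask telescopes can be illustrated with a minimal 1D sketch (the random mask and the noise-free geometry are simplifying assumptions, not details from the paper): a point source casts a shifted copy of the mask onto the detector, and cross-correlating the recorded shadowgram with the mask pattern recovers the source position.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cyclic 1D coded mask: 1 = open element, 0 = opaque element.
mask = rng.integers(0, 2, size=63)

# A point source at position s casts a shadowgram that is the mask
# cyclically shifted by s (ideal, noise-free geometry).
s = 17
shadowgram = np.roll(mask, s)

# Correlation decoding: cross-correlate the shadowgram with the mask.
# The correlation peaks at the shift that aligns the two, i.e. at the
# source position, because the mask's cyclic autocorrelation is
# maximal at zero lag.
correlation = np.array([np.sum(shadowgram * np.roll(mask, k))
                        for k in range(len(mask))])
reconstructed = int(np.argmax(correlation))
print(reconstructed)  # 17
```

Practical masks use patterns such as uniformly redundant arrays, whose flat autocorrelation sidelobes make this decoding step much cleaner than a random mask would.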

  4. Long Non-coding RNAs in the X-inactivation Center

    PubMed Central

    Kalantry, Sundeep

    2014-01-01

    The X-inactivation center is a hotbed of functional long non-coding RNAs in eutherian mammals. These RNAs are thought to help orchestrate the epigenetic transcriptional states of the two X-chromosomes in females as well as of the single X-chromosome in males. To balance X-linked gene expression between the sexes, females undergo transcriptional silencing of most genes on one of the two X-chromosomes in a process termed X-chromosome inactivation. While one X-chromosome is inactivated, the other X-chromosome remains active. Moreover, with a few notable exceptions, the originally established epigenetic transcriptional profiles of the two X-chromosomes are maintained through many rounds of cell division, essentially for the life of the organism. The stable divergent transcriptional fates of the two X-chromosomes, despite residing in a shared nucleoplasm, make X-inactivation a paradigm of epigenetic transcriptional regulation. Originally proposed in 1961 by Mary Lyon, the X-inactivation hypothesis has been validated through much experimentation over the last fifty years. In the last 25 years, the discovery and functional characterization of X-linked long non-coding RNAs have firmly established them as key players in choreographing X-chromosome inactivation. PMID:24297756

  5. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics

    PubMed Central

    Sowa, Katarzyna M.; Last, Arndt; Korecki, Paweł

    2017-01-01

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10–100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy. PMID:28322316

  6. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Sowa, Katarzyna M; Last, Arndt; Korecki, Paweł

    2017-03-21

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10-100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy.

  7. Origin and evolution of the long non-coding genes in the X-inactivation center.

    PubMed

    Romito, Antonio; Rougeulle, Claire

    2011-11-01

    Random X chromosome inactivation (XCI), the eutherian mechanism of X-linked gene dosage compensation, is controlled by a cis-acting locus termed the X-inactivation center (Xic). One of the striking features that characterize the Xic landscape is the abundance of loci transcribing non-coding RNAs (ncRNAs), including Xist, the master regulator of the inactivation process. Recent comparative genomic analyses have depicted the evolutionary scenario behind the origin of the X-inactivation center, revealing that this locus evolved from a region harboring protein-coding genes. During mammalian radiation, this ancestral protein-coding region was disrupted in the marsupial group, whilst in the eutherian lineage it provided the starting material for the non-translated RNAs of the X-inactivation center. The emergence of non-coding genes occurred by a dual mechanism involving loss of protein-coding function of the pre-existing genes and integration of different classes of mobile elements, some of which modeled the structure and sequence of the non-coding genes in a species-specific manner. The emerging genes began to produce transcripts that acquired function in regulating the epigenetic status of the X chromosome, as shown for Xist, its antisense Tsix, and Jpx, and as recently suggested for Ftx. Thus, the appearance of the Xic, which occurred after the divergence between eutherians and marsupials, was the basis for the evolution of random X inactivation as a strategy to achieve dosage compensation. Copyright © 2011. Published by Elsevier Masson SAS.

  8. The Maximal C³ Self-Complementary Trinucleotide Circular Code X in Genes of Bacteria, Archaea, Eukaryotes, Plasmids and Viruses.

    PubMed

    Michel, Christian J

    2017-04-18

    In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has on average the highest occurrence in the reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property, as X is a maximal C³ self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend this definition here to the gene level. This new statistical approach considers all the genes, i.e., of large and small lengths, with the same weight for searching the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated with each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes.

  9. Fragile X mental retardation protein participates in non-coding RNA pathways.

    PubMed

    Li, En-Hui; Zhao, Xin; Zhang, Ce; Liu, Wei

    2018-02-20

    Fragile X syndrome is one of the most common forms of inherited intellectual disability. It is caused by mutations of the Fragile X mental retardation 1(FMR1) gene, resulting in either the loss or abnormal expression of the Fragile X mental retardation protein (FMRP). Recent research showed that FMRP participates in non-coding RNA pathways and plays various important roles in physiology, thereby extending our knowledge of the pathogenesis of the Fragile X syndrome. Initial studies showed that the Drosophila FMRP participates in siRNA and miRNA pathways by interacting with Dicer, Ago1 and Ago2, involved in neural activity and the fate determination of the germline stem cells. Subsequent studies showed that the Drosophila FMRP participates in piRNA pathway by interacting with Aub, Ago1 and Piwi in the maintenance of normal chromatin structures and genomic stability. More recent studies showed that FMRP is associated with lncRNA pathway, suggesting a potential role for the involvement in the clinical manifestations. In this review, we summarize the novel findings and explore the relationship between FMRP and non-coding RNA pathways, particularly the piRNA pathway, thereby providing critical insights on the molecular pathogenesis of Fragile X syndrome, and potential translational applications in clinical management of the disease.

  10. The Maximal C3 Self-Complementary Trinucleotide Circular Code X in Genes of Bacteria, Archaea, Eukaryotes, Plasmids and Viruses

    PubMed Central

    Michel, Christian J.

    2017-01-01

    In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has on average the highest occurrence in the reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property, as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend this definition here to the gene level. This new statistical approach considers all the genes, i.e., of large and small lengths, with the same weight for searching the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated with each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes. PMID:28420220

  11. Self-complementary circular codes in coding theory.

    PubMed

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal C³ self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16, 2017; J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58, 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph G(X) associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size, in particular for maximal circular codes (20 trinucleotides). For codes of small size, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
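The graph-theoretic criterion described above can be sketched in a few lines. The construction follows the published recipe (each trinucleotide b1b2b3 contributes edges b1 → b2b3 and b1b2 → b3) together with the theorem that a trinucleotide code is circular exactly when this graph is acyclic; the two toy codes below are illustrative examples, not sets taken from the paper.

```python
def graph_edges(code):
    """Directed graph G(X) of Fimmel et al.: for each trinucleotide
    b1b2b3 in X, add edges b1 -> b2b3 and b1b2 -> b3."""
    edges = set()
    for t in code:
        edges.add((t[0], t[1:]))   # nucleotide -> dinucleotide
        edges.add((t[:2], t[2]))   # dinucleotide -> nucleotide
    return edges

def is_acyclic(edges):
    # Depth-first search with white/gray/black coloring: a back edge
    # to a gray vertex means the graph contains a cycle.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in adj.get(u, []):
            c = color.get(v, WHITE)
            if c == GRAY:
                return False       # cycle found
            if c == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True
    return all(dfs(u) for u in adj if color.get(u, WHITE) == WHITE)

def is_circular(code):
    # Theorem (Fimmel et al. 2016): X is circular iff G(X) is acyclic.
    return is_acyclic(graph_edges(code))

print(is_circular({"AAC", "AAT", "ACC"}))  # True: G(X) has no cycle
print(is_circular({"AAC", "ACA"}))         # False: AC -> A -> AC is a cycle
```

The second example fails because the circular word ACAACA admits two distinct decompositions over {AAC, ACA}, which the cycle AC → A → AC detects.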

  12. Simulating X-ray bursts with a radiation hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Seong, Gwangeon; Kwak, Kyujin

    2018-04-01

    Previous simulations of X-ray bursts (XRBs), for example those performed with MESA (Modules for Experiments in Stellar Astrophysics), could not address the dynamical effects of strong radiation, which are important for explaining the photospheric radius expansion (PRE) phenomena seen in many XRBs. In order to study the effects of strong radiation, we propose to use SNEC (the SuperNova Explosion Code), a 1D Lagrangian open-source code designed to solve hydrodynamics and equilibrium-diffusion radiation transport together. Because SNEC's radiation-hydrodynamics modules can be controlled for properly mapped inputs, the radiation-dominated pressure occurring in PRE XRBs can be handled. Here we present simulation models for PRE XRBs obtained by applying SNEC together with MESA.

  13. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

    This note presents the chromaticity calculations and code comparison results for the (x-ray lithography source) XLS (Chasman Green, XUV Cosy lattice) and (2 magnet 4T) SXLS lattices, with the standard beam optic codes, including programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is a part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.

  14. Systematic Comparison of Photoionized Plasma Codes with Application to Spectroscopic Studies of AGN in X-Rays

    NASA Technical Reports Server (NTRS)

    Mehdipour, M.; Kaastra, J. S.; Kallman, T.

    2016-01-01

    Atomic data and plasma models play a crucial role in the diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the Universe. In this investigation we present a systematic comparison of the leading photoionization codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionization equilibrium. We carry out our computations using the Cloudy, SPEX, and XSTAR photoionization codes, and compare their derived thermal and ionization states for various ionizing spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionized outflows in active galactic nuclei. By comparing the ionic abundances as a function of the ionization parameter ξ, we find that on average there is about 30% deviation between the codes in where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in ξ is smaller, at about 10% on average. The comparison of the absorption-line spectra in the X-ray band shows that there is on average about 30% deviation between the codes in the optical depth of the lines produced at log ξ ≈ 1 to 2, reducing to about 20% deviation at log ξ ≈ 3. We also simulate spectra of the ionized outflows with the current and upcoming high-resolution X-ray spectrometers on board XMM-Newton, Chandra, Hitomi, and Athena. From these simulations we obtain the deviation in the best-fit model parameters arising from the use of different photoionization codes, which is about 10% to 40%. We compare the modeling uncertainties with the observational uncertainties from the simulations. The results highlight the importance of continuous development and enhancement of photoionization codes for the upcoming era of X-ray astronomy with Athena.

  15. Development of a Coded Aperture X-Ray Backscatter Imager for Explosive Device Detection

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.; Rothschild, Richard E.; Leblanc, Philippe; McFee, John Elton

    2009-02-01

    Defence R&D Canada has an active research and development program on detection of explosive devices using nuclear methods. One system under development is a coded aperture-based X-ray backscatter imaging detector designed to provide sufficient speed, contrast and spatial resolution to detect antipersonnel landmines and improvised explosive devices. The successful development of a hand-held imaging detector requires, among other things, a light-weight, ruggedized detector with low power requirements, supplying high spatial resolution. The University of California, San Diego-designed HEXIS detector provides a modern, large area, high-temperature CZT imaging surface, robustly packaged in a light-weight housing with sound mechanical properties. Based on the potential for the HEXIS detector to be incorporated as the detection element of a hand-held imaging detector, the authors initiated a collaborative effort to demonstrate the capability of a coded aperture-based X-ray backscatter imaging detector. This paper will discuss the landmine and IED detection problem and review the coded aperture technique. Results from initial proof-of-principle experiments will then be reported.

  16. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
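The X.25 outer-code polynomial, X^16+X^12+X^5+1, can be exercised with a short GF(2) long-division sketch (the bit message and function names are illustrative, not from the report): appending the 16-bit remainder to a message yields a codeword whose remainder is zero, while any single-bit error leaves a nonzero remainder and is therefore detected.

```python
def crc_remainder(bits, poly, poly_deg):
    """Remainder of bits(X) * X^poly_deg divided by poly(X) over GF(2)."""
    # Append poly_deg zero bits (multiply message by X^deg), then do
    # binary polynomial long division, XOR-ing the divisor wherever a
    # leading 1 remains.
    reg = list(bits) + [0] * poly_deg
    divisor = [(poly >> i) & 1 for i in range(poly_deg, -1, -1)]
    for i in range(len(bits)):
        if reg[i]:
            for j in range(poly_deg + 1):
                reg[i + j] ^= divisor[j]
    return reg[-poly_deg:]

# X.25 / CRC-CCITT generator: X^16 + X^12 + X^5 + 1 -> 0x11021.
POLY, DEG = 0x11021, 16

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # arbitrary example bits
check = crc_remainder(message, POLY, DEG)
codeword = message + check

# A valid codeword leaves a zero remainder...
assert crc_remainder(codeword, POLY, DEG) == [0] * DEG
# ...while any single-bit error is detected (nonzero remainder),
# because the generator never divides a lone power of X.
corrupted = codeword[:]
corrupted[5] ^= 1
assert crc_remainder(corrupted, POLY, DEG) != [0] * DEG
print("single-bit error detected")
```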

  17. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-06

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as to incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field-of-view. Simulations are verified by comparison with experimental data.

  18. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
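The rate-limiting reaction named above (O + N2 = NO + N) drives the Zeldovich estimate of thermal NO formation, which can be sketched as follows. The rate-constant values are commonly quoted textbook numbers, not constants taken from this report, and the concentrations are arbitrary illustrative inputs.

```python
import math

def k1(T):
    """Forward rate constant for O + N2 -> NO + N, the rate-limiting
    step of the Zeldovich mechanism (cm^3 mol^-1 s^-1). The
    pre-exponential factor and activation temperature are commonly
    quoted textbook values, used here only for illustration."""
    return 1.8e14 * math.exp(-38370.0 / T)

def thermal_no_rate(T, O_conc, N2_conc):
    """Initial thermal NO formation rate d[NO]/dt ~ 2 k1 [O] [N2]
    (mol cm^-3 s^-1), valid far from equilibrium."""
    return 2.0 * k1(T) * O_conc * N2_conc

# The strong exponential temperature sensitivity is why thermal NOx
# forms almost entirely in the hottest regions of the flame:
r_1800 = thermal_no_rate(1800.0, 1e-9, 1e-5)
r_2200 = thermal_no_rate(2200.0, 1e-9, 1e-5)
print(r_2200 / r_1800)  # ~48: a 400 K rise boosts the rate ~50-fold
```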

  19. C2x: A tool for visualisation and input preparation for CASTEP and other electronic structure codes

    NASA Astrophysics Data System (ADS)

    Rutter, M. J.

    2018-04-01

    The c2x code fills two distinct roles. Its first role is in acting as a converter between the binary format .check files from the widely-used CASTEP [1] electronic structure code and various visualisation programs. Its second role is to manipulate and analyse the input and output files from a variety of electronic structure codes, including CASTEP, ONETEP and VASP, as well as the widely-used 'Gaussian cube' file format. Analysis includes symmetry analysis, and manipulation includes arbitrary cell transformations. It continues to be under development, with growing functionality, and is written in a form which would make it easy to extend it to work directly with files from other electronic structure codes. Data which c2x is capable of extracting from CASTEP's binary checkpoint files include charge densities, spin densities, wavefunctions, relaxed atomic positions, forces, the Fermi level, the total energy, and symmetry operations. It can recreate .cell input files from checkpoint files. Volumetric data can be output in formats useable by many common visualisation programs, and c2x will itself calculate integrals, expand data into supercells, and interpolate data via combinations of Fourier and trilinear interpolation. It can extract data along arbitrary lines (such as lines between atoms) as 1D output. C2x is able to convert between several common formats for describing molecules and crystals, including the .cell format of CASTEP. It can construct supercells, reduce cells to their primitive form, and add specified k-point meshes. It uses the spglib library [2] to report symmetry information, which it can add to .cell files. C2x is a command-line utility, so is readily included in scripts. It is available under the GPL and can be obtained from http://www.c2x.org.uk. It is believed to be the only open-source code which can read CASTEP's .check files, so it will have utility in other projects.

  20. More box codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    A new investigation shows that, starting from the BCH (21,15;3) code represented as a 7 x 3 matrix and adding a row and column to provide even parity, one obtains an 8 x 4 matrix (32,15;8) code. An additional dimension is obtained by specifying odd parity on the rows and even parity on the columns, i.e., by adjoining to the 8 x 4 matrix a matrix that is zero except for the fourth column (all ones). Furthermore, any seven rows and three columns will form the BCH (21,15;3) code. This box code has the same weight structure as the quadratic residue and BCH codes of the same dimensions. Whether there exists an algebraic isomorphism to either code is as yet unknown.
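The parity-extension step described above can be sketched generically. The random 7 x 3 bit block below is a stand-in: the construction only needs the shape, not an actual BCH (21,15;3) codeword.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 7x3 array of code bits.
block = rng.integers(0, 2, size=(7, 3))

# Append a parity column so every row has even parity, then a parity
# row so every column has even parity, giving an 8x4 array. The
# bottom-right bit is consistent either way, because the overall sum
# of the extended array is even by construction.
with_col = np.hstack([block, block.sum(axis=1, keepdims=True) % 2])
with_row = np.vstack([with_col, with_col.sum(axis=0, keepdims=True) % 2])

assert with_row.shape == (8, 4)
assert np.all(with_row.sum(axis=1) % 2 == 0)  # every row even
assert np.all(with_row.sum(axis=0) % 2 == 0)  # every column even
print("8x4 extended array has even parity on all rows and columns")
```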

  1. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    NASA Astrophysics Data System (ADS)

    Alexander, J. P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M. P.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.

    2014-12-01

    We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10 - 100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of 2 - 5 GeV. X-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances.

  2. Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code

    DOE PAGES

    Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...

    2015-06-03

    Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. Here, we abstract from a forest of concrete dependency trees which contain absolute memory addresses to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and finally by solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75 % performance improvement, four kernels from Irfan View, leading to 4.97 x performance, and one stencil from the mini GMG multigrid benchmark netting a 4.25 x improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving 1.12 x speedup without affecting the user experience.

  3. Differentiation-dependent Requirement of Tsix long non-coding RNA in Imprinted X-chromosome Inactivation

    PubMed Central

    Maclary, Emily; Buttigieg, Emily; Hinten, Michael; Gayen, Srimonta; Harris, Clair; Sarkar, Mrinal Kumar; Purushothaman, Sonya; Kalantry, Sundeep

    2014-01-01

    Imprinted X-inactivation is a paradigm of mammalian transgenerational epigenetic regulation resulting in silencing of genes on the paternally-inherited X-chromosome. The pre-programmed fate of the X-chromosomes is thought to be controlled in cis by the parent-of-origin-specific expression of two long non-coding RNAs, Tsix and Xist, in mice. Exclusive expression of Tsix from the maternal–X has implicated it as the instrument through which the maternal germline prevents inactivation of the maternal–X in the offspring. Here, we show that Tsix is dispensable for inhibiting Xist and X-inactivation in the early embryo and in cultured stem cells of extra-embryonic lineages. Tsix is instead required to prevent Xist expression as trophectodermal progenitor cells differentiate. Despite induction of wild-type Xist RNA and accumulation of histone H3-K27me3, many Tsix-mutant X-chromosomes fail to undergo ectopic X-inactivation. We propose a novel model of lncRNA function in imprinted X-inactivation that may also apply to other genomically imprinted loci. PMID:24979243

  4. Imprinted and X-linked non-coding RNAs as potential regulators of human placental function

    PubMed Central

    Buckberry, Sam; Bianco-Miotto, Tina; Roberts, Claire T

    2014-01-01

    Pregnancy outcome is inextricably linked to placental development, which is strictly controlled temporally and spatially through mechanisms that are only partially understood. However, increasing evidence suggests non-coding RNAs (ncRNAs) direct and regulate a considerable number of biological processes and therefore may constitute a previously hidden layer of regulatory information in the placenta. Many ncRNAs, including both microRNAs and long non-coding transcripts, show almost exclusive or predominant expression in the placenta compared with other somatic tissues and display altered expression patterns in placentas from complicated pregnancies. In this review, we explore the results of recent genome-scale and single gene expression studies using human placental tissue, but include studies in the mouse where human data are lacking. Our review focuses on the ncRNAs epigenetically regulated through genomic imprinting or X-chromosome inactivation and includes recent evidence surrounding the H19 lincRNA, the imprinted C19MC cluster microRNAs, and X-linked miRNAs associated with pregnancy complications. PMID:24081302

  5. X-ray backscatter radiography with lower open fraction coded masks

    NASA Astrophysics Data System (ADS)

    Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David

    2017-09-01

    Single sided radiographic imaging would find great utility for medical, aerospace and security applications. While coded apertures can be used to form such an image from backscattered X-rays they suffer from near field limitations that introduce noise. Several theoretical studies have indicated that for an extended source the image's signal-to-noise ratio may be optimised by using a low open fraction (<0.5) mask. However, few experimental results have been published for such low open fraction patterns and details of their formulation are often unavailable or are ambiguous. In this paper we address this process for two types of low open fraction mask, the dilute URA and the Singer set array. For the dilute URA the procedure for producing multiple 2D array patterns from given 1D binary sequences (Barker codes) is explained. Their point spread functions are calculated and their imaging properties are critically reviewed. These results are then compared to those from the Singer set and experimental exposures are presented for both types of pattern; their prospects for near field imaging are discussed.
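    The appeal of Barker codes as building blocks is their autocorrelation: the zero-lag peak equals the sequence length while every sidelobe has magnitude at most one, which is what gives the derived mask a sharp point spread function. A quick check for the length-13 Barker sequence:

```python
import numpy as np

# Aperiodic autocorrelation of the length-13 Barker sequence, the kind of
# 1D binary sequence from which dilute-URA mask patterns can be assembled.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")
peak = acf[len(barker13) - 1]                  # zero-lag value
sidelobes = np.delete(acf, len(barker13) - 1)  # all other lags
print(peak, np.abs(sidelobes).max())  # 13 1
```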

  6. 3D-printed coded apertures for x-ray backscatter radiography

    NASA Astrophysics Data System (ADS)

    Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David

    2017-09-01

    Many different mask patterns can be used for X-ray backscatter imaging using coded apertures, which can find application in the medical, industrial and security sectors. While some of these patterns may be considered to have a self-supporting structure, this is not the case for some of the most frequently used patterns such as uniformly redundant arrays or any pattern with a high open fraction. This makes mask construction difficult and usually requires a compromise in its design by drilling holes or adopting a no two holes touching version of the original pattern. In this study, this compromise was avoided by 3D printing a support structure that was then filled with a radiopaque material to create the completed mask. The coded masks were manufactured using two different methods, hot cast and cold cast. Hot casting involved casting a bismuth alloy at 80°C into the 3D printed acrylonitrile butadiene styrene mould which produced an absorber with density of 8.6 g cm-3. Cold casting was undertaken at room temperature, when a tungsten/epoxy composite was cast into a 3D printed polylactic acid mould. The cold cast procedure offered a greater density of around 9.6 to 10 g cm-3 and consequently greater X-ray attenuation. It was also found to be much easier to manufacture and more cost effective. A critical review of the manufacturing procedure is presented along with some typical images. In both cases the 3D printing process allowed square apertures to be created avoiding their approximation by circular holes when conventional drilling is used.
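    The practical consequence of the density difference between the two casting methods can be gauged with the Beer-Lambert law. The mass attenuation coefficient and thickness below are illustrative placeholders, not values measured in the study:

```python
import math

# Beer-Lambert comparison of the two cast absorbers. The mass attenuation
# coefficient and thickness are hypothetical, chosen only to illustrate
# that the denser cold-cast composite transmits fewer photons.
mu_over_rho = 1.5  # cm^2/g, illustrative value at some tens of keV
thickness = 0.2    # cm, illustrative mask thickness

def transmission(density_g_cm3):
    return math.exp(-mu_over_rho * density_g_cm3 * thickness)

t_hot = transmission(8.6)   # hot-cast bismuth alloy
t_cold = transmission(9.6)  # cold-cast tungsten/epoxy composite
print(t_hot, t_cold)        # the cold-cast material transmits less
```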

  7. Approximated transport-of-intensity equation for coded-aperture x-ray phase-contrast imaging.

    PubMed

    Das, Mini; Liang, Zhihua

    2014-09-15

    Transport-of-intensity equations (TIEs) allow better understanding of image formation and assist in simplifying the "phase problem" associated with phase-sensitive x-ray measurements. In this Letter, we present for the first time to our knowledge a simplified form of TIE that models x-ray differential phase-contrast (DPC) imaging with coded-aperture (CA) geometry. The validity of our approximation is demonstrated through comparison with an exact TIE in numerical simulations. The relative contributions of absorption, phase, and differential phase to the acquired phase-sensitive intensity images are made readily apparent with the approximate TIE, which may prove useful for solving the inverse phase-retrieval problem associated with these CA-geometry-based DPC systems.
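    For orientation, the standard (not the authors' CA-specific) TIE in one dimension reads ∂I/∂z = -(λ/2π) ∂/∂x (I ∂φ/∂x); a finite-difference evaluation of a single propagation step is a few lines. The grid, profiles, and wavelength below are arbitrary illustrations:

```python
import numpy as np

# One finite-difference propagation step of the standard (not CA-specific)
# 1D transport-of-intensity equation: dI/dz = -(lam/2pi) d/dx (I dphi/dx).
def tie_step(I, phi, dz, lam, dx):
    flux = I * np.gradient(phi, dx)                 # I * dphi/dx
    return I - dz * (lam / (2 * np.pi)) * np.gradient(flux, dx)

x = np.linspace(-1.0, 1.0, 256)
I0 = np.exp(-x**2 / 0.1)   # smooth illustrative intensity profile
phi = 0.5 * x**2           # quadratic (lens-like) illustrative phase
I1 = tie_step(I0, phi, dz=1e-3, lam=1e-4, dx=x[1] - x[0])
```

    A sanity check on the form: a constant phase produces zero transverse energy flow, so the intensity is unchanged after the step.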

  8. Bijective transformation circular codes and nucleotide exchanging RNA transcription.

    PubMed

    Michel, Christian J; Seligmann, Hervé

    2014-04-01

    The C(3) self-complementary circular code X identified in genes of prokaryotes and eukaryotes is a set of 20 trinucleotides enabling reading frame retrieval and maintenance, i.e. a framing code (Arquès and Michel, 1996; Michel, 2012, 2013). Some mitochondrial RNAs correspond to DNA sequences when RNA transcription systematically exchanges between nucleotides (Seligmann, 2013a,b). We study here the 23 bijective transformation codes ΠX of X which may code nucleotide exchanging RNA transcription as suggested by this mitochondrial observation. The 23 bijective transformation codes ΠX are C(3) trinucleotide circular codes, seven of them are also self-complementary. Furthermore, several correlations are observed between the Reading Frame Retrieval (RFR) probability of bijective transformation codes ΠX and the different biological properties of ΠX related to their numbers of RNAs in GenBank's EST database, their polymerization rate, their number of amino acids and the chirality of amino acids they code. Results suggest that the circular code X with the functions of reading frame retrieval and maintenance in regular RNA transcription, may also have, through its bijective transformation codes ΠX, the same functions in nucleotide exchanging RNA transcription. Associations with properties such as amino acid chirality suggest that the RFR of X and its bijective transformations molded the origins of the genetic code's machinery. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
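    The two defining checks here are mechanical: self-complementarity (the reverse complement of every trinucleotide of X is again in X) and the fact that a bijective nucleotide transformation carries X onto another 20-trinucleotide code. A sketch using the 20-trinucleotide set commonly cited from Arquès and Michel (1996) (the particular bijection chosen is just one of the 23):

```python
# The C3 self-complementary circular code X of 20 trinucleotides commonly
# cited from Arques & Michel (1996).
X = {"AAC", "AAT", "ACC", "ATC", "ATT", "CAG", "CTC", "CTG", "GAA", "GAC",
     "GAG", "GAT", "GCC", "GGC", "GGT", "GTA", "GTC", "GTT", "TAC", "TTC"}

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(t):
    """Reverse complement of a trinucleotide."""
    return t.translate(COMP)[::-1]

def transform(code, mapping):
    """Apply a bijective nucleotide transformation to every trinucleotide."""
    table = str.maketrans(mapping)
    return {t.translate(table) for t in code}

assert all(revcomp(t) in X for t in X)  # X is self-complementary
# One of the 23 non-identity bijections: exchange A<->G and C<->T.
PX = transform(X, {"A": "G", "G": "A", "C": "T", "T": "C"})
print(len(PX))  # 20: a bijection maps X onto another 20-trinucleotide code
```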

  9. Method for measuring the focal spot size of an x-ray tube using a coded aperture mask and a digital detector.

    PubMed

    Russo, Paolo; Mettivier, Giovanni

    2011-04-01

    The goal of this study is to evaluate a new method based on a coded aperture mask combined with a digital x-ray imaging detector for measurements of the focal spot sizes of diagnostic x-ray tubes. Common techniques for focal spot size measurements employ a pinhole camera, a slit camera, or a star resolution pattern. The coded aperture mask is a radiation collimator consisting of a large number of apertures disposed on a predetermined grid in an array, through which the radiation source is imaged onto a digital x-ray detector. The method of the coded mask camera allows one to obtain a one-shot accurate and direct measurement of the two dimensions of the focal spot (like that for a pinhole camera) but at a low tube loading (like that for a slit camera). A large number of small apertures in the coded mask operate as a "multipinhole" with greater efficiency than a single pinhole, but keeping the resolution of a single pinhole. X-ray images result from the multiplexed output on the detector image plane of such a multiple aperture array, and the image of the source is digitally reconstructed with a deconvolution algorithm. Images of the focal spot of a laboratory x-ray tube (W anode: 35-80 kVp; focal spot size of 0.04 mm) were acquired at different geometrical magnifications with two different types of digital detector (a photon counting hybrid silicon pixel detector with 0.055 mm pitch and a flat panel CMOS digital detector with 0.05 mm pitch) using a high resolution coded mask (type no-two-holes-touching modified uniformly redundant array) with 480 apertures of 0.07 mm, designed for imaging at energies below 35 keV. Measurements with a slit camera were performed for comparison. A test with a pinhole camera and with the coded mask on a computed radiography mammography unit with 0.3 mm focal spot was also carried out. The full width at half maximum focal spot sizes were obtained from the line profiles of the decoded images, showing a focal spot of 0.120 mm x 0.105 mm at 35
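    The multiplex-then-decode principle behind the coded mask camera can be shown in a toy 1D setting. Here we use the small (7,3,1) quadratic-residue difference set {1,2,4} as the aperture pattern (our simplification, not the paper's 2D no-two-holes-touching MURA) and recover a point source by periodic correlation with the balanced decoding array:

```python
import numpy as np

# Toy 1D analogue of coded-mask imaging and correlation decoding, using the
# (7,3,1) quadratic-residue difference set {1,2,4} instead of the paper's
# 2D no-two-holes-touching MURA.
n = 7
aperture = np.zeros(n)
aperture[[1, 2, 4]] = 1.0          # open mask positions
decoder = 2 * aperture - 1         # balanced decoding array

source = np.zeros(n)
source[3] = 1.0                    # point source (the "focal spot")

# Detector records the circular convolution of source and aperture.
detector = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(aperture)))

# Decode by periodic cross-correlation with the decoding array.
recon = np.array([np.dot(detector, np.roll(decoder, k)) for k in range(n)])
print(int(np.argmax(recon)))  # 3: the source position is recovered
```

    The (7,3,1) difference-set property makes the aperture/decoder correlation a delta function plus a flat offset, which is why the peak lands exactly at the source position.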

  10. Multiple Trellis Coded Modulation (MTCM): An MSAT-X report

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1986-01-01

    Conventional trellis coding outputs one channel symbol per trellis branch. The notion of multiple trellis coding is introduced wherein more than one channel symbol per trellis branch is transmitted. It is shown that the combination of multiple trellis coding with M-ary modulation yields a performance gain with a symmetric signal set comparable to that previously achieved only with signal constellation asymmetry. The advantage of multiple trellis coding over the conventional trellis coded asymmetric modulation technique is that the potential for code catastrophe associated with the latter has been eliminated with no additional cost in complexity (as measured by the number of states in the trellis diagram).

  11. Simulation of Targets Feeding Pipe Rupture in Wendelstein 7-X Facility Using RELAP5 and COCOSYS Codes

    NASA Astrophysics Data System (ADS)

    Kaliatka, T.; Povilaitis, M.; Kaliatka, A.; Urbonavicius, E.

    2012-10-01

    The Wendelstein 7-X (W7-X) nuclear fusion device is a stellarator-type experimental facility, developed by the Max Planck Institute of Plasma Physics. Rupture of one of the 40 mm inner diameter coolant pipes providing water for the divertor targets during the "baking" regime of the facility operation is considered to be the most severe accident in terms of the plasma vessel pressurization. The "baking" regime is the operating regime during which plasma vessel structures are heated to a temperature acceptable for plasma ignition in the vessel. This paper presents the model of the W7-X cooling system (pumps, valves, pipes, hydro-accumulators, and heat exchangers), developed using the state-of-the-art thermal-hydraulic code RELAP5 Mod3.3, and the model of the plasma vessel, developed by employing the lumped-parameter code COCOSYS. Using both models, the numerical simulation of processes in the W7-X cooling system and plasma vessel has been performed. The simulation results showed that an automatic valve closure time of 1 s is the most acceptable (no water hammer effect occurs) and that the selected burst disk area is sufficient to prevent overpressure in the plasma vessel.

  12. Monte Carlo Simulation of X-Ray Spectra in Mammography and Contrast-Enhanced Digital Mammography Using the Code PENELOPE

    NASA Astrophysics Data System (ADS)

    Cunha, Diego M.; Tomal, Alessandra; Poletti, Martin E.

    2013-04-01

    In this work, the Monte Carlo (MC) code PENELOPE was employed for simulation of x-ray spectra in mammography and contrast-enhanced digital mammography (CEDM). Spectra for Mo, Rh and W anodes were obtained for tube potentials between 24-36 kV, for mammography, and between 45-49 kV, for CEDM. The spectra obtained from the simulations were analytically filtered to correspond to the anode/filter combinations usually employed in each technique (Mo/Mo, Rh/Rh and W/Rh for mammography and Mo/Cu, Rh/Cu and W/Cu for CEDM). For the Mo/Mo combination, the simulated spectra were compared with those obtained experimentally, and for spectra for the W anode, with experimental data from the literature, through comparison of distribution shape, average energies, half-value layers (HVL) and transmission curves. For all combinations evaluated, the simulated spectra were also compared with those provided by different models from the literature. Results showed that the code PENELOPE provides mammographic x-ray spectra in good agreement with those experimentally measured and those from the literature. The differences in the values of HVL ranged between 2-7%, for anode/filter combinations and tube potentials employed in mammography, and they were less than 5% for those employed in CEDM. The transmission curves for the spectra obtained also showed good agreement compared to those computed from reference spectra, with average relative differences less than 12% for mammography and CEDM. These results show that the code PENELOPE can be a useful tool to generate x-ray spectra for studies in mammography and CEDM, and also for evaluation of new x-ray tube designs and new anode materials.
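    A half-value layer for a polychromatic spectrum, as compared in this work, is the filter thickness at which the spectrum-weighted transmission drops to one half; it can be found by bisection. The spectrum weights and attenuation coefficients below are illustrative placeholders, not PENELOPE output:

```python
import math

# Half-value layer of a polychromatic beam via bisection on the
# spectrum-weighted transmission. Fluence weights and linear attenuation
# coefficients are illustrative, not simulated or measured values.
spectrum = [(15.0, 0.3), (20.0, 1.0), (25.0, 0.6)]  # (energy keV, rel. fluence)
mu = {15.0: 2.0, 20.0: 0.9, 25.0: 0.5}              # hypothetical mu (1/mm)

def transmission(t_mm):
    total = sum(w for _, w in spectrum)
    return sum(w * math.exp(-mu[e] * t_mm) for e, w in spectrum) / total

lo, hi = 0.0, 10.0
for _ in range(60):              # bisect transmission(t) = 0.5
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if transmission(mid) > 0.5 else (lo, mid)
hvl = 0.5 * (lo + hi)
print(round(hvl, 3))             # HVL in mm for this toy spectrum
```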

  13. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-Resolution Coded-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.

    2016-01-01

    Wide-field (greater than or approximately equal to 100 square degrees) hard X-ray coded-aperture telescopes with high angular resolution (less than or approximately equal to 2 arcminutes) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes to enable rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or model the systematic uncertainty in a timescale where the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics based probabilistic approach to evaluate the significance of source detection without subtraction in handling the background. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using the data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
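    The Poisson-statistics approach to detection significance without background subtraction can be illustrated generically: instead of subtracting a background estimate, one asks how improbable the observed counts are under the modeled background rate. A minimal sketch (our illustration of the general idea, not the paper's exact estimator):

```python
import math

# Significance of a source candidate without background subtraction:
# the Poisson probability of observing >= n counts in a pixel whose
# modeled background expectation is lam.
def poisson_pvalue(n, lam):
    # P(X >= n) = 1 - sum_{k < n} exp(-lam) * lam**k / k!
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n))
    return 1.0 - cdf

print(poisson_pvalue(10, 2.0))  # tiny: 10 counts on a background of 2
print(poisson_pvalue(3, 2.0))   # large: 3 counts on a background of 2
```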

  14. Differential expression of non-coding RNAs and continuous evolution of the X chromosome in testicular transcriptome of two mouse species.

    PubMed

    Homolka, David; Ivanek, Robert; Forejt, Jiri; Jansa, Petr

    2011-02-14

    Tight regulation of testicular gene expression is a prerequisite for male reproductive success, while differentiation of gene activity in spermatogenesis is important during speciation. Thus, comparison of testicular transcriptomes between closely related species can reveal unique regulatory patterns and shed light on evolutionary constraints separating the species. Here, we compared testicular transcriptomes of two closely related mouse species, Mus musculus and Mus spretus, which diverged more than one million years ago. We analyzed testicular expression using tiling arrays overlapping Chromosomes 2, X, Y and the mitochondrial genome. An excess of differentially regulated non-coding RNAs was found on Chromosome 2, including intronic antisense RNAs, intergenic RNAs and premature forms of Piwi-interacting RNAs (piRNAs). Moreover, a striking difference was found in the expression of the X-linked G6pdx gene, the parental gene of the autosomal retrogene G6pd2. The prevalence of non-coding RNAs among differentially expressed transcripts indicates their role in species-specific regulation of spermatogenesis. The postmeiotic expression of G6pdx in Mus spretus points towards the continuous evolution of X-chromosome silencing and provides an example of expression change accompanying out-of-the-X chromosomal retroposition.

  15. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  16. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  17. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  18. Finite-beta equilibria for Wendelstein 7-X configurations using the Princeton Iterative Equilibrium Solver code

    NASA Astrophysics Data System (ADS)

    Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.

    1999-04-01

    Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun., 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency of "self-healing" of islands has been observed.
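    Why blending parameters help an iterative equilibrium solver can be seen in a generic fixed-point setting (our illustration, not the PIES algorithm itself): blending, i.e. under-relaxation x_{k+1} = (1-α)x_k + αF(x_k), can turn a diverging Picard iteration into a contracting one:

```python
# Generic illustration of blended (under-relaxed) Picard iteration,
# x_{k+1} = (1 - alpha)*x_k + alpha*F(x_k). The plain iteration of the
# toy map below diverges (|F'| = 1.5 > 1); blending with alpha = 0.3
# makes the composed map a contraction.
def F(x):
    return -1.5 * x + 5.0  # toy map with fixed point x* = 2

x_plain, x_blend = 0.0, 0.0
for _ in range(50):
    x_plain = F(x_plain)                        # diverges
    x_blend = 0.7 * x_blend + 0.3 * F(x_blend)  # converges to 2
print(abs(x_plain) > 1e6, abs(x_blend - 2.0) < 1e-10)  # True True
```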

  19. Performance tuning of N-body codes on modern microprocessors: I. Direct integration with a hermite scheme on x86_64 architecture

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro; Hut, Piet

    2006-12-01

    The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N2 type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops, for double-precision accuracy. In subsequent papers, we will discuss other variations, including the combinations of N log N codes, single-precision implementations, and performance on other microprocessors.
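    The pairwise kernel being tuned is the direct-summation gravitational acceleration, a_i = G Σ_j m_j (r_j - r_i)/|r_j - r_i|³. For reference, here is that kernel as plain vectorized NumPy, i.e. the untuned baseline rather than the paper's hand-written assembly:

```python
import numpy as np

# Direct-summation N-body acceleration kernel (the untuned reference for
# the hand-optimized x86_64 version discussed in the paper):
# a_i = G * sum_j m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
def accelerations(pos, mass, G=1.0, eps=0.0):
    dr = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # dr[i, j] = r_j - r_i
    dist2 = (dr**2).sum(axis=2) + eps**2
    np.fill_diagonal(dist2, np.inf)                     # suppress self-force
    inv_r3 = dist2**-1.5
    return G * (dr * (mass[np.newaxis, :] * inv_r3)[:, :, np.newaxis]).sum(axis=1)

pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
acc = accelerations(pos, np.array([1.0, 1.0]))
print(acc)  # equal and opposite accelerations of magnitude 1/4
```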

  20. Multitasking the three-dimensional shock wave code CTH on the Cray X-MP/416

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGlaun, J.M.; Thompson, S.L.

    1988-01-01

    CTH is a software system under development at Sandia National Laboratories, Albuquerque, that models multidimensional, multi-material, large-deformation, strong shock wave physics. CTH was carefully designed to both vectorize and multitask on the Cray X-MP/416. All of the physics routines are vectorized except the thermodynamics and the interface tracer. All of the physics routines are multitasked except the boundary conditions. The Los Alamos National Laboratory multitasking library was used for the multitasking. The resulting code is easy to maintain, easy to understand, gives the same answers as the unitasked code, and achieves a measured speedup of approximately 3.5 on the four-CPU Cray. This document discusses the design, prototyping, development, and debugging of CTH. It also covers the architecture features of CTH that enhance multitasking, granularity of the tasks, and synchronization of tasks. The utility of system software and utilities such as simulators and interactive debuggers is also discussed. 5 refs., 7 tabs.
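    The reported speedup of 3.5 on four CPUs can be read through Amdahl's law (this reading is ours, not the report's): S(N) = 1/((1-p) + p/N) for parallel fraction p, so S(4) = 3.5 implies that roughly 95% of the runtime was multitasked:

```python
# Amdahl's-law interpretation (ours, not the report's) of the measured
# 3.5x speedup on four CPUs: S(N) = 1 / ((1 - p) + p / N).
def speedup(p, n_cpus):
    return 1.0 / ((1.0 - p) + p / n_cpus)

# Invert S(4) = 3.5:  (1 - p) + p/4 = 1/3.5  =>  p = (1 - 1/3.5) / (3/4)
p = (1.0 - 1.0 / 3.5) / 0.75
print(round(p, 3), round(speedup(p, 4), 2))  # 0.952 3.5
```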

  1. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, Jim Michael

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write an xRAGE input deck that matches the boundary conditions imposed by the solution, and into determining what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent within xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.
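    The essential "2T" ingredient is that ions and electrons carry separate temperatures coupled by a relaxation term. A generic sketch of that coupling (our illustration with equal heat capacities and a fixed relaxation time; it is not the Masser, Wohlbier, and Lowrie shock solution itself):

```python
# Generic two-temperature (ion-electron) relaxation sketch, NOT the
# semi-analytic shock solution itself: with equal heat capacities,
# dTe/dt = (Ti - Te)/tau and dTi/dt = -(Ti - Te)/tau, so the two
# temperatures relax to their common mean while Te + Ti is conserved.
tau, dt = 1.0, 1e-3
Te, Ti = 1.0, 9.0                # arbitrary initial temperatures
for _ in range(20000):           # integrate to t = 20 >> tau
    dTe = (Ti - Te) / tau * dt
    Te, Ti = Te + dTe, Ti - dTe
print(round(Te, 6), round(Ti, 6))  # both approach 5.0, the mean of 1 and 9
```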

  2. Differential Expression of Non-Coding RNAs and Continuous Evolution of the X Chromosome in Testicular Transcriptome of Two Mouse Species

    PubMed Central

    Homolka, David; Ivanek, Robert; Forejt, Jiri; Jansa, Petr

    2011-01-01

    Background Tight regulation of testicular gene expression is a prerequisite for male reproductive success, while differentiation of gene activity in spermatogenesis is important during speciation. Thus, comparison of testicular transcriptomes between closely related species can reveal unique regulatory patterns and shed light on evolutionary constraints separating the species. Methodology/Principal Findings Here, we compared testicular transcriptomes of two closely related mouse species, Mus musculus and Mus spretus, which diverged more than one million years ago. We analyzed testicular expression using tiling arrays overlapping Chromosomes 2, X, Y and mitochondrial genome. An excess of differentially regulated non-coding RNAs was found on Chromosome 2 including the intronic antisense RNAs, intergenic RNAs and premature forms of Piwi-interacting RNAs (piRNAs). Moreover, striking difference was found in the expression of X-linked G6pdx gene, the parental gene of the autosomal retrogene G6pd2. Conclusions/Significance The prevalence of non-coding RNAs among differentially expressed transcripts indicates their role in species-specific regulation of spermatogenesis. The postmeiotic expression of G6pdx in Mus spretus points towards the continuous evolution of X-chromosome silencing and provides an example of expression change accompanying the out-of-the X-chromosomal retroposition. PMID:21347268

  3. Towers of generalized divisible quantum codes

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
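    Divisibility of a classical code is easy to check by enumeration. As a concrete example of a doubly even code (divisor 2^2 = 4), take the [8,4] first-order Reed-Muller code RM(1,3), whose generator rows evaluate 1, x1, x2, x3 over the 8 points of F_2^3:

```python
from itertools import product

# Weight enumeration of the [8,4] first-order Reed-Muller code RM(1,3),
# a doubly even (divisor 4 = 2^2) binary code. Rows of G evaluate the
# functions 1, x1, x2, x3 at the 8 points of F_2^3.
G = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

weights = set()
for coeffs in product([0, 1], repeat=4):        # all 16 codewords
    word = [sum(c * g for c, g in zip(coeffs, col)) % 2 for col in zip(*G)]
    weights.add(sum(word))
print(sorted(weights))  # [0, 4, 8]: every weight is divisible by 4
```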

  4. n-Nucleotide circular codes in graph theory.

    PubMed

    Fimmel, Elena; Michel, Christian J; Strüngmann, Lutz

    2016-03-13

    The circular code theory proposes that genes are constituted of two trinucleotide codes: the classical genetic code with 61 trinucleotides for coding the 20 amino acids (except the three stop codons {TAA,TAG,TGA}) and a circular code based on 20 trinucleotides for retrieving, maintaining and synchronizing the reading frame. It relies on two main results: the identification of a maximal C(3) self-complementary trinucleotide circular code X in genes of bacteria, eukaryotes, plasmids and viruses (Michel 2015 J. Theor. Biol. 380, 156-177. (doi:10.1016/j.jtbi.2015.04.009); Arquès & Michel 1996 J. Theor. Biol. 182, 45-58. (doi:10.1006/jtbi.1996.0142)) and the finding of X circular code motifs in tRNAs and rRNAs, in particular in the ribosome decoding centre (Michel 2012 Comput. Biol. Chem. 37, 24-37. (doi:10.1016/j.compbiolchem.2011.10.002); El Soufi & Michel 2014 Comput. Biol. Chem. 52, 9-17. (doi:10.1016/j.compbiolchem.2014.08.001)). The universally conserved nucleotides A1492 and A1493 and the conserved nucleotide G530 are included in X circular code motifs. Recently, dinucleotide circular codes were also investigated (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631); Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)). As the genetic motifs of different lengths are ubiquitous in genes and genomes, we introduce a new approach based on graph theory to study in full generality n-nucleotide circular codes X, i.e. of length 2 (dinucleotide), 3 (trinucleotide), 4 (tetranucleotide), etc. Indeed, we prove that an n-nucleotide code X is circular if and only if the corresponding graph [Formula: see text] is acyclic. Moreover, the maximal length of a path in [Formula: see text] corresponds to the window of nucleotides in a sequence for detecting the correct reading frame. Finally, the graph theory of tournaments is applied to the study of dinucleotide circular codes. It has full equivalence between the combinatorics
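    For trinucleotides, the graph criterion is concrete: for each word b1b2b3 in the code, draw arcs b1 → b2b3 and b1b2 → b3 over vertices that are nucleotides and dinucleotides; the code is circular if and only if the resulting directed graph is acyclic. A sketch with a depth-first-search cycle check:

```python
# Graph-theoretic circularity test for trinucleotide codes: for each
# word b1b2b3, add arcs b1 -> b2b3 and b1b2 -> b3; the code is circular
# iff this directed graph is acyclic (checked here by DFS coloring).
def is_circular(code):
    edges = {}
    for t in code:
        edges.setdefault(t[0], set()).add(t[1:])
        edges.setdefault(t[:2], set()).add(t[2])
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}

    def has_cycle(v):
        color[v] = GREY
        for w in edges.get(v, ()):
            if color.get(w, WHITE) == GREY:
                return True
            if color.get(w, WHITE) == WHITE and has_cycle(w):
                return True
        color[v] = BLACK
        return False

    return not any(color.get(v, WHITE) == WHITE and has_cycle(v)
                   for v in list(edges))

print(is_circular({"ACG"}))         # True
print(is_circular({"AAA"}))         # False: A -> AA -> A is a cycle
print(is_circular({"ACG", "CGA"}))  # False: A -> CG -> A is a cycle
```

    The last example shows why {ACG, CGA} cannot retrieve the reading frame: the concatenation ACGACG... reads as CGA-CGA in a shifted frame, and exactly this ambiguity appears as the cycle A → CG → A.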

  5. DROP: Detecting Return-Oriented Programming Malicious Code

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Xiao, Hai; Shen, Xiaobin; Yin, Xinchun; Mao, Bing; Xie, Li

    Return-Oriented Programming (ROP) is a new technique that helps the attacker construct malicious code mounted on x86/SPARC executables without any function call at all. This technique leaves the ROP malicious code containing no instructions of its own, which differs from existing attacks. Moreover, it hides the malicious code in benign code. Thus, it circumvents approaches that prevent control-flow diversion outside legitimate regions (such as W ⊕ X) and most malicious code scanning techniques (such as anti-virus scanners). However, ROP has intrinsic features that differ from normal program design: (1) it uses short instruction sequences ending in "ret", called gadgets, and (2) it executes the gadgets contiguously in specific memory space, such as the standard GNU libc. Based on these features of ROP malicious code, in this paper we present a tool, DROP, which is focused on dynamically detecting ROP malicious code. Preliminary experimental results show that DROP can efficiently detect ROP malicious code, with no false positives or negatives.
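    The two features above suggest a runtime heuristic: flag execution when several consecutive instruction sequences each end in "ret" after only a few instructions. The sketch below is a toy version of that idea; the thresholds S and W are our assumptions, not DROP's published parameters:

```python
# Toy DROP-style runtime heuristic (thresholds are assumed, not the
# paper's exact parameters): flag a trace as ROP-like when at least W
# consecutive instruction sequences each end in "ret" within S instructions.
S, W = 5, 3  # assumed gadget-length and chain-length thresholds

def looks_like_rop(trace):
    run = 0      # consecutive short ret-terminated sequences seen so far
    length = 0   # instructions executed since the last "ret"
    for insn in trace:
        length += 1
        if insn == "ret":
            run = run + 1 if length <= S else 0
            if run >= W:
                return True
            length = 0
    return False

benign = ["push", "mov", "call", "mov", "add", "mov", "ret"] * 4
ropish = ["pop", "ret", "mov", "add", "ret", "xchg", "ret", "pop", "ret"]
print(looks_like_rop(benign), looks_like_rop(ropish))  # False True
```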

  6. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  7. A STUDY OF THE X-RAYED OUTFLOW OF APM 08279+5255 THROUGH PHOTOIONIZATION CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saez, Cristian; Chartas, George, E-mail: saez@astro.psu.edu, E-mail: chartasg@cofc.edu

    2011-08-20

    We present new results from our study of the X-rayed outflow of the z = 3.91 gravitationally lensed broad absorption line quasar APM 08279+5255. These results are based on spectral fits to all the long exposure observations of APM 08279+5255 using a new quasar-outflow model. This model is based on simulations of a near-relativistic quasar outflow with CLOUDY, a photoionization code designed to simulate conditions in interstellar matter under a broad range of conditions; we have used version 08.00 of the code, last described by Ferland et al. (1998). The atomic database used by CLOUDY is described in Ferguson et al. (2001) and http://www.pa.uky.edu/~verner/atom.html. The main conclusions from our multi-epoch spectral re-analysis of Chandra, XMM-Newton, and Suzaku observations of APM 08279+5255 are the following. (1) In every observation, we confirm the presence of two strong features, one at rest-frame energies between 1-4 keV and the other between 7-18 keV. (2) We confirm that the low-energy absorption (1-4 keV rest frame) arises from a low-ionization absorber with log(N_H/cm^-2) ~ 23 and the high-energy absorption (7-18 keV rest frame) arises from highly ionized (3 ≲ log ξ ≲ 4, where ξ is the ionization parameter) iron in a near-relativistic outflowing wind. Assuming this interpretation, we find that the velocities of the outflow could reach ~0.7c. (3) We confirm a correlation between the maximum outflow velocity and the photon index and find possible trends between the maximum outflow velocity and the X-ray luminosity, and between the total column density and the photon index. We performed calculations of the force multipliers of material illuminated by absorbed power laws and a Mathews-Ferland spectral energy distribution (SED). We found that variations of the X-ray and UV parts of the SEDs and the presence of a moderate absorbing shield will produce important changes in the strength of

  8. DEVELOPMENT OF ASME SECTION X CODE RULES FOR HIGH PRESSURE COMPOSITE HYDROGEN PRESSURE VESSELS WITH NON-LOAD SHARING LINERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawls, G.; Newhouse, N.; Rana, M.

    2010-04-13

    The Boiler and Pressure Vessel Project Team on Hydrogen Tanks was formed in 2004 to develop Code rules to address the various needs that had been identified for the design and construction of hydrogen storage vessels of up to 15,000 psi. One of these needs was the development of Code rules for high pressure composite vessels with non-load sharing liners for stationary applications. In 2009, ASME approved the new Appendix 8 for the Section X Code, which contains the rules for these vessels. These vessels are designated as Class III vessels, with design pressures ranging from 20.7 MPa (3,000 psi) to 103.4 MPa (15,000 psi) and a maximum allowable outside liner diameter of 2.54 m (100 inches). The maximum design life of these vessels is limited to 20 years. Design, fabrication, and examination requirements have been specified, including Acoustic Emission testing at time of manufacture. The Code rules include the design qualification testing of prototype vessels. Qualification includes proof, expansion, burst, cyclic fatigue, creep, flaw, permeability, torque, penetration, and environmental testing.

  9. Finite-beta equilibria for Wendelstein 7-X configurations using the Princeton Iterative Equilibrium Solver code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arndt, S.; Merkel, P.; Monticello, D.A.

    Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)], which can deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed. © 1999 American Institute of Physics.

  10. Unitary circular code motifs in genomes of eukaryotes.

    PubMed

    El Soufi, Karim; Michel, Christian J

    A set X of 20 trinucleotides was identified in genes of bacteria, eukaryotes, plasmids and viruses which has, on average, the highest occurrence in the reading frame compared to its two shifted frames (Michel, 2015; Arquès and Michel, 1996). This set X has an interesting mathematical property, as X is a circular code (Arquès and Michel, 1996). Thus, the motifs from this circular code X, called X motifs, have the property of always retrieving, synchronizing and maintaining the reading frame in genes. The origin of this circular code X in genes has been an open problem since its discovery in 1996. Here, we first show that the unitary circular codes (UCC), i.e. sets of one word, allow the generation of unitary circular code motifs (UCC motifs), i.e. concatenations of the same motif (simple repeats) leading to low complexity DNA. Three classes of UCC motifs are studied here: repeated dinucleotides (D+ motifs), repeated trinucleotides (T+ motifs) and repeated tetranucleotides (T+ motifs). Thus, the D+, T+ and T+ motifs allow a frame to be retrieved, synchronized and maintained modulo 2, modulo 3 and modulo 4, respectively, along with their shifted frames (1 modulo 2; 1 and 2 modulo 3; 1, 2 and 3 modulo 4, according to the C2, C3 and C4 properties, respectively) in DNA sequences. The statistical distribution of the D+, T+ and T+ motifs is analyzed in the genomes of eukaryotes. A UCC motif and its complementary UCC motif have the same distribution in the eukaryotic genomes. Furthermore, a UCC motif and its complementary UCC motif have occurrences that increase contrary to their number of hydrogen bonds, an effect that is very significant for the T+ motifs. The longest D+, T+ and T+ motifs in the studied eukaryotic genomes are also given. Surprisingly, a scarcity of repeated trinucleotides (T+ motifs) in the large eukaryotic genomes is observed compared to the D+ and T+ motifs. This result has been investigated and may be explained by two outcomes. Repeated trinucleotides (T+ motifs) are identified
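
    The frame-retrieval property of a unitary circular code can be seen directly for a repeated-dinucleotide (D+) motif: in (AC)^n the word "AC" occurs only at even offsets and "CA" only at odd offsets, so any occurrence of either word reveals the frame modulo 2 (the C2 property). A minimal sketch on a toy sequence, not genomic data:

    ```python
    # D+ motif built from the unitary circular code {AC}
    motif = "AC" * 10

    # offsets where each 2-word occurs; "AC" is only at even offsets,
    # its circular shift "CA" only at odd offsets
    ac = [i for i in range(len(motif) - 1) if motif[i:i+2] == "AC"]
    ca = [i for i in range(len(motif) - 1) if motif[i:i+2] == "CA"]

    assert all(i % 2 == 0 for i in ac)   # frame 0 (mod 2)
    assert all(i % 2 == 1 for i in ca)   # shifted frame 1 (mod 2)
    ```

    The analogous check with a trinucleotide or tetranucleotide repeat retrieves the frame modulo 3 or modulo 4.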

  11. Shielding properties of 80TeO2-5TiO2-(15-x) WO3-xAnOm glasses using WinXCom and MCNP5 code

    NASA Astrophysics Data System (ADS)

    Dong, M. G.; El-Mallawany, R.; Sayyed, M. I.; Tekin, H. O.

    2017-12-01

    The gamma-ray shielding properties of 80TeO2-5TiO2-(15-x)WO3-xAnOm glasses, where AnOm is Nb2O5 = 0.01, 5, Nd2O3 = 3, 5 and Er2O3 = 5 mol%, have been investigated. The shielding parameters, namely mass attenuation coefficients, half value layers, and the macroscopic effective removal cross section for fast neutrons, have been computed using the WinXCom program and the MCNP5 Monte Carlo code. In addition, exposure buildup factor values were calculated using the Geometric Progression (G-P) method. Variations of the shielding parameters are discussed with respect to the effect of rare earth oxide addition into the glasses and the photon energy.
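
    As a quick numerical illustration of one of these parameters, the half value layer follows directly from the linear attenuation coefficient as HVL = ln 2 / μ, with μ = (μ/ρ)·ρ. The sketch below uses made-up, purely illustrative values for μ/ρ and the glass density, since the paper's tabulated coefficients are not reproduced here:

    ```python
    import math

    def half_value_layer(mass_atten_cm2_per_g, density_g_per_cm3):
        """HVL = ln(2) / mu, where mu is the linear attenuation coefficient (1/cm)."""
        mu = mass_atten_cm2_per_g * density_g_per_cm3
        return math.log(2) / mu

    # Illustrative (not measured) values for a dense tellurite glass:
    mu_m = 0.06   # assumed mass attenuation coefficient at some energy, cm^2/g
    rho = 5.8     # assumed glass density, g/cm^3
    hvl = half_value_layer(mu_m, rho)
    print(f"HVL = {hvl:.3f} cm")
    ```

    A lower HVL at a given photon energy indicates a better shielding glass, which is why the parameter is tracked as a function of composition.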

  12. Definite Integrals, Some Involving Residue Theory Evaluated by Maple Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Kimiko o

    2010-01-01

    The calculus of residues is applied to evaluate certain integrals over the range (-∞, ∞) using the Maple symbolic code. These integrals are of the form ∫_{-∞}^{∞} cos(x)/[(x² + a²)(x² + b²)(x² + c²)] dx and similar extensions. The Maple code is also applied to expressions for the moments of maximum likelihood estimators when sampling from the negative binomial distribution. In general, the Maple code approach to the integrals gives correct answers to the specified number of decimal places, but the symbolic result may be extremely long and complex.
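
    The quoted integral has a closed form by the residue theorem: closing the contour in the upper half-plane picks up the poles of e^{iz}/((z²+a²)(z²+b²)(z²+c²)) at ia, ib, ic. The sketch below (not the Maple session of the paper) checks that closed form against a direct numerical quadrature for illustrative values a = 1, b = 2, c = 3:

    ```python
    import math
    import numpy as np

    def I_residue(a, b, c):
        """Closed form of the integral of cos(x)/((x^2+a^2)(x^2+b^2)(x^2+c^2))
        over the real line, from the residues at the upper-half-plane poles."""
        return math.pi * (
            math.exp(-a) / (a * (b*b - a*a) * (c*c - a*a))
            + math.exp(-b) / (b * (a*a - b*b) * (c*c - b*b))
            + math.exp(-c) / (c * (a*a - c*c) * (b*b - c*c))
        )

    a, b, c = 1.0, 2.0, 3.0
    closed = I_residue(a, b, c)

    # Numerical check: the integrand decays like 1/x^6, so a finite interval suffices.
    x = np.linspace(-50.0, 50.0, 400001)
    f = np.cos(x) / ((x*x + a*a) * (x*x + b*b) * (x*x + c*c))
    numeric = float(np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0]))
    print(closed, numeric)   # the two values agree closely
    ```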

  13. Improvements of the Radiation Code "MstrnX" in AORI/NIES/JAMSTEC Models

    NASA Astrophysics Data System (ADS)

    Sekiguchi, M.; Suzuki, K.; Takemura, T.; Watanabe, M.; Ogura, T.

    2015-12-01

    There is a large demand for an accurate yet rapid radiative transfer scheme for general climate models. The broadband radiative transfer code "mstrnX" was developed by the Atmosphere and Ocean Research Institute (AORI) and has been implemented in several global and regional climate models cooperatively developed in the Japanese research community, for example MIROC (the Model for Interdisciplinary Research on Climate) [Watanabe et al., 2010], NICAM (Non-hydrostatic Icosahedral Atmospheric Model) [Satoh et al., 2008], and CReSS (Cloud Resolving Storm Simulator) [Tsuboki and Sakakibara, 2002]. In this study, we improve the gas absorption process and the scattering process of ice particles. For the update of the gas absorption process, the absorption line database is replaced by the latest version from the Harvard-Smithsonian Center, HITRAN2012. An optimization method is adopted in mstrnX to decrease the number of integration points for the wavenumber integration using the correlated k-distribution method and to increase the computational efficiency in each band. The integration points and weights of the correlated k-distribution are optimized for accurate calculation of the heating rate up to an altitude of 70 km. For this purpose we adopted a new non-linear optimization method for the correlated k-distribution and studied an optimal initial condition and cost function for the non-linear optimization. It is known that mstrnX has a considerable bias in the case of quadrupled carbon dioxide concentrations [Pincus et al., 2015]; however, this bias is decreased by the improvement. For the update of the scattering process of ice particles, we adopt a solid column as the ice crystal habit [Yang et al., 2013]. The single scattering properties are calculated and tabulated in advance. The size parameter in this table ranged from 0.1 to 1000 in mstrnX; we extended the maximum to 50000 in order to cover large particles such as fog and rain drops. These updates will be introduced to

  14. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm, and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
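
    The front end of such a scheme, a block 2-D DCT followed by Mandela ordering, can be sketched in a few lines. This is a generic illustration assuming 8x8 blocks and a toy 32x32 image; the trellis search itself is omitted:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix."""
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
        c[0, :] /= np.sqrt(2.0)
        return c

    B = 8
    C = dct_matrix(B)
    rng = np.random.default_rng(0)
    image = rng.random((32, 32))

    # Block 2-D DCT: transform each 8x8 block as C @ block @ C.T
    blocks = image.reshape(4, B, 4, B).transpose(0, 2, 1, 3)        # (4,4,8,8)
    coeffs = np.einsum('ij,abjk,lk->abil', C, blocks, C)

    # Mandela ordering: group identically indexed coefficients across blocks
    # into one 1-D sequence per (u, v) index, e.g. all DC terms together.
    mandela = {(u, v): coeffs[:, :, u, v].ravel()
               for u in range(B) for v in range(B)}

    # The transform is orthonormal, so inverting it recovers the image exactly.
    recon = np.einsum('ji,abjk,kl->abil', C, coeffs, C)             # C.T @ coeffs @ C
    recon_image = recon.transpose(0, 2, 1, 3).reshape(32, 32)
    assert np.allclose(recon_image, image)
    ```

    Each of the 64 Mandela sequences is roughly stationary, which is what makes a separate sliding-block search per sequence effective.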

  15. DynamiX, numerical tool for design of next-generation x-ray telescopes.

    PubMed

    Chauvin, Maxime; Roques, Jean-Pierre

    2010-07-20

    We present a new code aimed at the simulation of grazing-incidence x-ray telescopes subject to deformations and demonstrate its ability with two test cases: the Simbol-X and the International X-ray Observatory (IXO) missions. The code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, accounting for the x-ray interactions and for the telescope motion and deformation. The simulation produces images and spectra for any telescope configuration using Wolter I mirrors and semiconductor detectors. This numerical tool allows us to study the telescope performance in terms of angular resolution, effective area, and detector efficiency, accounting for the telescope behavior. We have implemented an image reconstruction method based on the measurement of the detector drifts by an optical sensor metrology. Using an accurate metrology, this method allows us to recover the loss of angular resolution induced by the telescope instability. In the framework of the Simbol-X mission, this code was used to study the impacts of the parameters on the telescope performance. In this paper we present detailed performance analysis of Simbol-X, taking into account the satellite motions and the image reconstruction. To illustrate the versatility of the code, we present an additional performance analysis with a particular configuration of IXO.

  16. Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu

    2011-03-15

    Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. Epp agrees with DOSXYZnrc in dose calculation

  17. Hard X-ray imaging from Explorer

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Murray, S. S.

    1981-01-01

    Coded aperture X-ray detectors have been applied to obtain large increases in sensitivity as well as angular resolution. A hard X-ray coded aperture detector concept is described which enables very high sensitivity studies of persistent hard X-ray sources and gamma-ray bursts. Coded aperture imaging is employed so that approx. 2 arcmin source locations can be derived within a 3 deg field of view. Gamma-ray bursts would be located initially to within approx. 2 deg, and X-ray/hard X-ray spectra and timing, as well as precise locations, derived for possible burst afterglow emission. It is suggested that hard X-ray imaging should be conducted from an Explorer mission where long exposure times are possible.
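
    The decoding principle behind coded aperture imaging can be sketched in a few lines: a random mask's autocorrelation is sharply peaked, so cross-correlating the detector image with the mask pattern recovers source positions. This toy model (random 32x32 binary mask, single point source, circular convolution) is illustrative only and is not the instrument design described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 32
    mask = rng.integers(0, 2, size=(n, n)).astype(float)   # random open/closed aperture

    sky = np.zeros((n, n))
    sky[10, 17] = 1.0                                      # a single point source

    # The detector records the sky convolved (circularly here) with the mask shadow.
    detector = np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(mask)).real

    # Decode by cross-correlating the detector image with the mask.
    recon = np.fft.ifft2(np.fft.fft2(detector) * np.conj(np.fft.fft2(mask))).real

    peak = np.unravel_index(np.argmax(recon), recon.shape)
    print(peak)   # recovers the source position
    ```

    Real systems use purpose-built patterns (e.g. uniformly redundant arrays) whose flat autocorrelation sidelobes give artifact-free reconstructions, but the correlation decode is the same.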

  18. Convolutional coding results for the MVM '73 X-band telemetry experiment

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1978-01-01

    Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.
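
    For readers unfamiliar with the technique, a short-constraint-length convolutional encoder and a hard-decision Viterbi decoder can be sketched as follows. This is a toy constraint-length-3, rate-1/2 code with generators 7 and 5 (octal), chosen for brevity; it is not the specific code simulated for MVM'73 or flown on Voyager:

    ```python
    G = (0b111, 0b101)  # generator polynomials (7, 5 octal), constraint length 3

    def encode(bits):
        """Rate-1/2 encoder; emits two output bits per input bit."""
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state                 # 3-bit register, newest bit on top
            out += [bin(reg & g).count('1') & 1 for g in G]
            state = reg >> 1                       # keep the two most recent bits
        return out

    def viterbi(received):
        """Hard-decision Viterbi decoding over the 4-state trellis."""
        INF = float('inf')
        metrics = [0, INF, INF, INF]               # encoder starts in state 0
        paths = [[], [], [], []]
        for i in range(0, len(received), 2):
            r = received[i:i+2]
            new_metrics = [INF] * 4
            new_paths = [None] * 4
            for state in range(4):
                if metrics[state] == INF:
                    continue
                for b in (0, 1):                   # add-compare-select
                    reg = (b << 2) | state
                    sym = [bin(reg & g).count('1') & 1 for g in G]
                    dist = (sym[0] != r[0]) + (sym[1] != r[1])
                    nxt = reg >> 1
                    m = metrics[state] + dist
                    if m < new_metrics[nxt]:
                        new_metrics[nxt] = m
                        new_paths[nxt] = paths[state] + [b]
            metrics, paths = new_metrics, new_paths
        return paths[metrics.index(min(metrics))]
    ```

    With two zero tail bits appended to flush the register, this toy code (free distance 5) corrects isolated channel errors, which is the behavior the noisy-symbol-stream simulations above quantify for stronger codes.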

  19. Simulation the spatial resolution of an X-ray imager based on zinc oxide nanowires in anodic aluminium oxide membrane by using MCNP and OPTICS Codes

    NASA Astrophysics Data System (ADS)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for x-ray imaging. A bulk scintillation detector does not have good spatial resolution because of the spreading of light inside the scintillator. Nanowire scintillators, because of their wave-guiding behavior, can prevent this spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) nanowire scintillator, with its simple construction by electrochemical deposition into the regular hexagonal structure of an anodic aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated with a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated with a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code for the passage of particles through matter (Geant4). The results verify that this scintillator nanowire structure has a spatial resolution of less than one micrometer.

  20. Federal Logistics Information Systems. FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Variable Length). Volume 9.

    DTIC Science & Technology

    1997-04-01

    DATA COLLABORATORS 0001N B NQ
    8380 NUMBER OF DATA RECEIVERS 0001N B NQ
    2533 AUTHORIZED ITEM IDENTIFICATION DATA COLLABORATOR CODE 0002X B 03 18 TD ... 01 NC
    8268 DATA ELEMENT TERMINATOR CODE 0001X VT
    9505 TYPE OF SCREENING CODE 0001A 01 NC
    8268 DATA ELEMENT TERMINATOR CODE 0001X VT
    4690 OUTPUT DATA ...
    9505 TYPE OF SCREENING CODE 0001A 2 89
    2910 REFERENCE NUMBER CATEGORY CODE (RNCC) 0001X 2 89
    4780 REFERENCE NUMBER VARIATION CODE (RNVC) 0001N 2 89

  1. Understanding the Cray X1 System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    2004-01-01

    This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform.
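
    The Laplacian test codes mentioned above reduce, in serial form, to a stencil sweep like the following Jacobi iteration. This is a generic sketch of the kernel class being benchmarked, not the actual NAS or Ames test codes:

    ```python
    import numpy as np

    # Jacobi iteration for the 2-D Laplace equation on a square grid,
    # with u = 1 fixed on one edge and u = 0 on the others.
    n = 64
    u = np.zeros((n, n))
    u[0, :] = 1.0

    for _ in range(5000):
        # whole-array update: each interior point becomes the average
        # of its four neighbors from the previous iterate
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])

    # residual of the interior 5-point stencil
    res = np.abs(u[1:-1, 1:-1] - 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                         u[1:-1, :-2] + u[1:-1, 2:])).max()
    print(res)
    ```

    Loops of exactly this regular, data-parallel shape are what a vectorizing compiler such as the X1's can map onto long vector registers, which is why the kernel is a useful compiler probe.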

  2. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
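
    The column code of this construction, the extended (8,4;4) BCH-Hamming code, can be built and checked in a few lines. The systematic generator below is one standard choice for the (7,4) Hamming code, not necessarily the exact generator used in the paper:

    ```python
    from itertools import product

    # Systematic generator of a (7,4) Hamming code; an eighth overall-parity
    # bit extends it to the (8,4;4) code used as the column code above.
    G = [
        [1, 0, 0, 0, 0, 1, 1],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 1, 1, 0],
        [0, 0, 0, 1, 1, 1, 1],
    ]

    def encode(msg):
        word = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        return word + [sum(word) % 2]          # extend with overall parity

    codewords = [encode(m) for m in product((0, 1), repeat=4)]
    weights = sorted(sum(c) for c in codewords if any(c))
    print(min(weights))   # minimum weight = minimum distance of the linear code
    ```

    The minimum weight comes out to 4, and every codeword has even weight, which is what lets the row-parity dimension of the 9 x 8 box turn column errors into erasures for the Reed-Solomon correction step.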

  3. Enrichment of Circular Code Motifs in the Genes of the Yeast Saccharomyces cerevisiae.

    PubMed

    Michel, Christian J; Ngoune, Viviane Nguefack; Poch, Olivier; Ripp, Raymond; Thompson, Julie D

    2017-12-03

    A set X of 20 trinucleotides has been found to have the highest average occurrence in the reading frame, compared to the two shifted frames, of genes of bacteria, archaea, eukaryotes, plasmids and viruses. This set X has an interesting mathematical property, since X is a maximal C3 self-complementary trinucleotide circular code. Furthermore, any motif obtained from this circular code X has the capacity to retrieve, maintain and synchronize the original (reading) frame. Since 1996, the theory of circular codes in genes has mainly been developed by analysing the properties of the 20 trinucleotides of X, using combinatorics and statistical approaches. For the first time, we test this theory by analysing the X motifs, i.e., motifs from the circular code X, in the complete genome of the yeast Saccharomyces cerevisiae. Several properties of X motifs are identified by basic statistics (at the frequency level), and evaluated by comparison to R motifs, i.e., random motifs generated from 30 different random codes R. We first show that the frequency of X motifs is significantly greater than that of R motifs in the genome of S. cerevisiae. We then verify that no significant difference is observed between the frequencies of X and R motifs in the non-coding regions of S. cerevisiae, but that the occurrence number of X motifs is significantly higher than R motifs in the genes (protein-coding regions). This property is true for all cardinalities of X motifs (from 4 to 20) and for all 16 chromosomes. We further investigate the distribution of X motifs in the three frames of S. cerevisiae genes and show that they occur more frequently in the reading frame, regardless of their cardinality or their length. Finally, the ratio of X genes, i.e., genes with at least one X motif, to non-X genes, in the set of verified genes is significantly different to that observed in the set of putative or dubious genes with no experimental evidence.
These results, taken together, represent the first
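
    Two of the stated properties of X are easy to check computationally. The sketch below assumes the 20-trinucleotide set of Arquès and Michel (1996) and the graph criterion of Fimmel, Michel and Strüngmann (2016), under which a trinucleotide code is circular iff a certain directed graph is acyclic; both the set and the criterion are taken from the literature, not from this abstract:

    ```python
    # The 20 trinucleotides of the circular code X (Arquès and Michel, 1996)
    X = {"AAC", "AAT", "ACC", "ATC", "ATT", "CAG", "CTC", "CTG", "GAA", "GAC",
         "GAG", "GAT", "GCC", "GGC", "GGT", "GTA", "GTC", "GTT", "TAC", "TTC"}

    COMP = str.maketrans("ACGT", "TGCA")
    def revcomp(w):
        return w.translate(COMP)[::-1]

    # self-complementarity: X equals its own set of reverse complements
    assert all(revcomp(w) in X for w in X)

    # graph criterion: for each word b1b2b3, add edges b1 -> b2b3 and b1b2 -> b3;
    # the code is circular iff this directed graph has no cycle
    edges = set()
    for w in X:
        edges.add((w[0], w[1:]))
        edges.add((w[:2], w[2]))

    def acyclic(edges):
        graph = {}
        for a, b in edges:
            graph.setdefault(a, []).append(b)
        state = {}                      # 1 = on DFS stack, 2 = finished
        def dfs(v):
            state[v] = 1
            for nb in graph.get(v, []):
                if state.get(nb) == 1 or (state.get(nb) is None and not dfs(nb)):
                    return False
            state[v] = 2
            return True
        return all(state.get(v) == 2 or dfs(v) for v in graph)

    print(acyclic(edges))   # True: X is a circular code
    ```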

  4. Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1985-01-01

    Traditionally, symmetric multiple phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It is shown that by designing the signal constellations to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of n/(n + 1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance of the trellis. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system is an indication of the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is to be emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. Asymmetric MPSK signal sets in coded systems appear earlier in the work of Divsalar.
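
    The distance measure involved is easy to illustrate numerically: a symmetric M-PSK constellation has d_min = 2 sin(π/M), while an asymmetric one (sketched here as two QPSK subsets offset by an arbitrary illustrative angle, not a value from the paper) trades a smaller uncoded d_min for larger distances between the remaining point pairs, which a trellis code can exploit:

    ```python
    import cmath
    import itertools
    import math

    def min_distance(phases):
        """Minimum Euclidean distance of a unit-energy PSK constellation."""
        pts = [cmath.exp(1j * p) for p in phases]
        return min(abs(x - y) for x, y in itertools.combinations(pts, 2))

    # Symmetric 8PSK: d_min = 2*sin(pi/8)
    sym = [2 * math.pi * k / 8 for k in range(8)]

    # Asymmetric 8PSK: two QPSK subsets offset by an illustrative angle delta,
    # instead of the symmetric offset pi/4.
    delta = 0.2
    asym = [2 * math.pi * k / 4 + (delta if j else 0.0)
            for k in range(4) for j in (0, 1)]

    print(min_distance(sym), min_distance(asym))
    ```

    The uncoded d_min of the asymmetric set is smaller, which is exactly why asymmetry only pays off jointly with a trellis code that never assigns nearest asymmetric neighbors to competing paths.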

  5. How to Build a Time Machine: Interfacing Hydrodynamics, Ionization Calculations and X-ray Spectral Codes for Supernova Remnants

    NASA Astrophysics Data System (ADS)

    Badenes, Carlos

    2006-02-01

    Thanks to Chandra and XMM-Newton, spatially resolved spectroscopy of SNRs in the X-ray band has become a reality. Several impressive data sets for ejecta-dominated SNRs can now be found in the archives, the Cas A VLP just being one (albeit probably the most spectacular) example. However, it is often hard to establish quantitative, unambiguous connections between the X-ray observations of SNRs and the dramatic events involved in a core collapse or thermonuclear SN explosion. The reason for this is that the very high quality of the data sets generated by Chandra and XMM for the likes of Cas A, SNR 292.0+1.8, Tycho, and SN 1006 has surpassed our ability to analyze them. The core of the problem is in the transient nature of the plasmas in SNRs, which results in an intimate relationship between the structure of the ejecta and AM, the SNR dynamics arising from their interaction, and the ensuing X-ray emission. Thus, the ONLY way to understand the X-ray observations of ejecta-dominated SNRs at all levels, from the spatially integrated spectra to the subarcsecond scales that can be resolved by Chandra, is to couple hydrodynamic simulations to nonequilibrium ionization (NEI) calculations and X-ray spectral codes. I will review the basic ingredients that enter this kind of calculation, and what the prospects are for using them to understand the X-ray emission from the shocked ejecta in young SNRs. This understanding (when it is possible) can turn SNRs into veritable time machines, revealing the secrets of the titanic explosions that generated them hundreds of years ago.

  6. Research on effect of emission uniformity on X-band relativistic backward oscillator using conformal PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zaigao

    2016-07-15

    Explosive emission cathodes (EECs) are adopted in relativistic backward wave oscillators (RBWOs) to generate intense relativistic electron beams. The emission uniformity of the EEC can render saturation of the power generation unstable and the output mode impure. However, direct measurement of the plasma parameters on the cathode surface is quite difficult, and there are very few related numerical studies of this issue. In this paper, a self-developed three-dimensional conformal fully electromagnetic particle-in-cell code is used to study the effect of emission uniformity on the X-band RBWO; the electron explosive emission model and the field emission model are both implemented on the same cathode surface, and the local field enhancement factor is also considered in the field emission model. The RBWO with a random nonuniform EEC is thoroughly studied using this code; the simulation results reveal that when the area ratio of the cathode surface for electron explosive emission is 80%, the output power is unstable and the output mode is impure. When the annular EEC does not emit electrons in an angle range of 30°, the RBWO can still operate normally.

  7. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
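
    The nested scalar quantization at the heart of such a coder can be sketched in toy form: the encoder transmits only the coset index of a finely quantized sample, and the decoder resolves the coset ambiguity using the correlated side information. The step size and nesting factor below are illustrative, not parameters of the actual video coder:

    ```python
    import random

    q = 0.5        # fine quantizer step
    N = 8          # nesting factor: only log2(N) = 3 bits are transmitted

    def encode(x):
        """Send only the coset index of the fine quantization cell."""
        return round(x / q) % N

    def decode(coset, y):
        """Pick the reconstruction level in the coset closest to the side info y."""
        k0 = round(y / q)
        best = min((k0 + d for d in range(-N, N + 1)),
                   key=lambda k: abs(k * q - y) if k % N == coset else float('inf'))
        return best * q

    # Whenever |x - y| is small relative to N*q, decoding is exact to within q/2.
    random.seed(0)
    for _ in range(1000):
        x = random.uniform(-100, 100)
        y = x + random.uniform(-0.6, 0.6)   # side info correlated with x
        assert abs(decode(encode(x), y) - x) <= q / 2 + 1e-9
    ```

    The practical coder replaces this toy coset index with Slepian-Wolf-coded (LDPC) syndromes, but the decoding-with-side-information principle is the same.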

  8. Absence of coding mutations in the X-linked genes neuroligin 3 and neuroligin 4 in individuals with autism from the IMGSAC collection.

    PubMed

    Blasi, Francesca; Bacchelli, Elena; Pesaresi, Giulia; Carone, Simona; Bailey, Anthony J; Maestrini, Elena

    2006-04-05

    Neuroligin abnormalities have been recently implicated in the aetiology of autism spectrum disorders (ASD), given the finding of point mutations in the two X-linked genes NLGN3 and NLGN4X and the important role of neuroligins in synaptogenesis. To enquire on the relevance and frequency of neuroligin mutations in ASD, we performed a mutation screening of NLGN3 and NLGN4X in a sample of 124 autism probands from the International Molecular Genetic Study of Autism Consortium (IMGSAC). We identified a new non-synonymous variant in NLGN3 (Thr632Ala), which is likely to be a rare polymorphism. Our data indicate that coding mutations in these genes are very rarely associated to ASD. Copyright 2006 Wiley-Liss, Inc.

  9. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., power gurdy TROLL X X 15 X X All other gear types OTH X X ADF&G GEAR CODES Diving 11 X X Dredge 22 X X Dredge, hydro/mechanical 23 X X Fish ladder/raceway 77 X X Fish wheel 08 X X Gillnet, drift 03 X X...

  10. Accuracy assessment and characterization of x-ray coded aperture coherent scatter spectral imaging for breast cancer classification

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2017-01-01

    Although transmission-based x-ray imaging is the most commonly used imaging approach for breast cancer detection, it exhibits false negative rates higher than 15%. To improve cancer detection accuracy, x-ray coherent scatter computed tomography (CSCT) has been explored to potentially detect cancer with greater consistency. However, the 10-min scan duration of CSCT limits its possible clinical applications. The coded aperture coherent scatter spectral imaging (CACSSI) technique has been shown to reduce scan time through enabling single-angle imaging while providing high detection accuracy. Here, we use Monte Carlo simulations to test analytical optimization studies of the CACSSI technique, specifically for detecting cancer in ex vivo breast samples. An anthropomorphic breast tissue phantom was modeled, a CACSSI imaging system was virtually simulated to image the phantom, a diagnostic voxel classification algorithm was applied to all reconstructed voxels in the phantom, and receiver-operator characteristics analysis of the voxel classification was used to evaluate and characterize the imaging system for a range of parameters that have been optimized in a prior analytical study. The results indicate that CACSSI is able to identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) in tissue samples with a cancerous voxel identification area-under-the-curve of 0.94 through a scan lasting less than 10 s per slice. These results show that coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue within ex vivo samples. Furthermore, the results indicate potential CACSSI imaging system configurations for implementation in subsequent imaging development studies. PMID:28331884

  11. Turtle 24.0 diffusion depletion code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altomare, S.; Barry, R.F.

    1971-09-01

    TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference.

  12. Ftx is a non-coding RNA which affects Xist expression and chromatin structure within the X-inactivation center region.

    PubMed

    Chureau, Corinne; Chantalat, Sophie; Romito, Antonio; Galvani, Angélique; Duret, Laurent; Avner, Philip; Rougeulle, Claire

    2011-02-15

    X chromosome inactivation (XCI) is an essential epigenetic process which involves several non-coding RNAs (ncRNAs), including Xist, the master regulator of X-inactivation initiation. Xist is flanked in its 5' region by a large heterochromatic hotspot, which contains several transcription units including a gene of unknown function, Ftx (five prime to Xist). In this article, we describe the characterization and functional analysis of murine Ftx. We present evidence that Ftx produces a conserved functional long ncRNA, and additionally hosts microRNAs (miR) in its introns. Strikingly, Ftx partially escapes X-inactivation and is upregulated specifically in female ES cells at the onset of X-inactivation, an expression profile which closely follows that of Xist. We generated Ftx null ES cells to address the function of this gene. In these cells, only local changes in chromatin marks are detected within the hotspot, indicating that Ftx is not involved in the global maintenance of the heterochromatic structure of this region. The Ftx mutation, however, results in widespread alteration of transcript levels within the X-inactivation center (Xic) and particularly important decreases in Xist RNA levels, which were correlated with increased DNA methylation at the Xist CpG island. Altogether our results indicate that Ftx is a positive regulator of Xist and lead us to propose that Ftx is a novel ncRNA involved in XCI.

  13. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness in a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method by which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively.
The CACTI camera's ability to embed video volumes into images leads to exploration

  14. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Gear Codes, Descriptions, and Use 15 Table... ALASKA Pt. 679, Table 15 Table 15 to Part 679—Gear Codes, Descriptions, and Use Gear Codes, Descriptions, and Use (X indicates where this code is used) Name of gear Use alphabetic code to complete the...

  15. Maximum likelihood decoding of Reed Solomon Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudan, M.

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed-Solomon codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
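    The decoding task in this record can be made concrete with a deliberately naive sketch (this is not Sudan's randomized algorithm, and the sample points below are invented for illustration): enumerate every polynomial of degree at most d over a small prime field and keep those agreeing with at least t of the points.

```python
# Naive illustration of the list-decoding problem (NOT Sudan's polynomial-time
# randomized algorithm): enumerate all polynomials of degree <= d over the
# prime field GF(p) and keep those agreeing with >= t of the given points.
from itertools import product

def list_decode_bruteforce(points, p, d, t):
    """Return coefficient tuples (c0, ..., cd) of every qualifying f."""
    hits = []
    for coeffs in product(range(p), repeat=d + 1):
        agree = 0
        for x, y in points:
            val = 0
            for c in reversed(coeffs):      # Horner evaluation mod p
                val = (val * x + c) % p
            agree += (val == y % p)
        if agree >= t:
            hits.append(coeffs)
    return hits

# Five points on f(x) = 1 + 2x over GF(7), plus one erroneous point (5, 6).
pts = [(0, 1), (1, 3), (2, 5), (3, 0), (4, 2), (5, 6)]
codewords = list_decode_bruteforce(pts, p=7, d=1, t=5)
print(codewords)  # [(1, 2)], i.e. f(x) = 1 + 2x
```

    Sudan's contribution is performing this search in time polynomial in n; the brute force above costs p^(d+1) evaluations and is only meant to make the problem statement concrete.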

  16. Intricate Li-Sn Disorder in Rare-Earth Metal-Lithium Stannides. Crystal Chemistry of RE3Li4-xSn4+x (RE = La-Nd, Sm; x < 0.3) and Eu7Li8-xSn10+x (x ≈ 2.0).

    PubMed

    Suen, Nian-Tzu; Guo, Sheng-Ping; Hoos, James; Bobev, Svilen

    2018-05-07

    Reported are the syntheses, crystal structures, and electronic structures of six rare-earth metal-lithium stannides with the general formulas RE3Li4-xSn4+x (RE = La-Nd, Sm) and Eu7Li8-xSn10+x. These new ternary compounds have been synthesized by high-temperature reactions of the corresponding elements. Their crystal structures have been established using single-crystal X-ray diffraction methods. The RE3Li4-xSn4+x phases crystallize in the orthorhombic body-centered space group Immm (No. 71) with the Zr3Cu4Si4 structure type (Pearson code oI22), and the Eu7Li8-xSn10+x phase crystallizes in the orthorhombic base-centered space group Cmmm (No. 65) with the Ce7Li8Ge10 structure type (Pearson code oC50). Both structures can be considered as part of the [RESn2]n[RELi2Sn]m homologous series, wherein the structures are intergrowths of imaginary RESn2 (AlB2-like structure type) and RELi2Sn (MgAl2Cu-like structure type) fragments. Close examination of the structures indicates complex occupational Li-Sn disorder, apparently governed by the drive of the structure to achieve an optimal number of valence electrons. This conclusion, based on experimental results, is supported by detailed electronic structure calculations carried out using the tight-binding linear muffin-tin orbital method.

  17. xRage Equation of State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    2016-08-16

    The xRage code supports a variety of hydrodynamic equation of state (EOS) models. In practice these are generally accessed in the executing code via a pressure-temperature based table look up. This document will describe the various models supported by these codes and provide details on the algorithms used to evaluate the equation of state.

  18. Predictions of GPS X-Set Performance during the Places Experiment

    DTIC Science & Technology

    1979-07-01

    A previously existing GPS X-set receiver simulation was modified to include the received signal spectrum and the receiver code correlation operation. The X-set receiver simulation documented in Reference 3-1 is a direct sampled-data digital implementation of the GPS X-set. [Figure 3-6, a simplified block diagram of the code correlator operation and I-Q sampling, is omitted from this excerpt.]

  19. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. A second optimization was done on the selection of the anode shape based on the Monte Carlo in-water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes, including cylindrical, spherical, and conical, were considered. Moreover, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated with a computational fluid dynamics (CFD) code. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  20. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
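    The block-multiplier mechanism described above can be sketched as follows (a hypothetical toy, not the authors' perceptual optimizer: the quantization matrix, the unweighted error metric, and the target value are all stand-ins): each block's DCT coefficients are quantized with m·Q, and the largest multiplier keeping that block's quantization error under a common target is chosen, which flattens error across blocks.

```python
import numpy as np

# Toy version of per-block multiplier selection. Q, the error metric, and
# the target are placeholders; the JPEG Part 3 scheme scales one quantization
# matrix by a per-8x8-block multiplier, as described in the abstract above.
rng = np.random.default_rng(0)
Q = np.full((8, 8), 16.0)                               # stand-in quantization matrix
blocks = [rng.normal(0, 50, (8, 8)) for _ in range(4)]  # fake DCT coefficient blocks

def quant_error(C, m):
    Cq = np.round(C / (m * Q)) * (m * Q)    # quantize, then dequantize
    return np.sqrt(np.mean((C - Cq) ** 2))  # RMS error for the block

def pick_multiplier(C, target, grid=np.linspace(0.25, 4.0, 16)):
    ok = [m for m in grid if quant_error(C, m) <= target]
    return max(ok) if ok else grid[0]       # coarsest quantizer meeting target

mults = [pick_multiplier(C, target=6.0) for C in blocks]
print(mults)
```

    A perceptual optimizer would replace the RMS metric with the masking-adjusted DCT error described in the abstract; the selection loop is otherwise the same.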

  1. Modelling the thermal conductivity of (UxTh1-x)O2 and (UxPu1-x)O2

    DOE PAGES

    Cooper, M. W. D.; Middleburgh, S. C.; Grimes, R. W.

    2015-07-15

    The degradation of thermal conductivity due to the non-uniform cation lattice of (UxTh1-x)O2 and (UxPu1-x)O2 solid solutions has been investigated by molecular dynamics, using the non-equilibrium method, from 300 to 2000 K. Degradation of thermal conductivity is predicted in (UxTh1-x)O2 and (UxPu1-x)O2 as compositions deviate from the pure end members: UO2, PuO2 and ThO2. The reduction in thermal conductivity is most apparent at low temperatures, where phonon-defect scattering dominates over phonon-phonon interactions. The effect is greater for (UxTh1-x)O2 than for (UxPu1-x)O2 due to the greater mismatch in cation size. Parameters for analytical expressions have been developed that describe the predicted thermal conductivities over the full temperature and compositional ranges. Finally, these expressions may be used in higher level fuel performance codes.
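    The form of such an analytical expression can be sketched as below, assuming the common lattice-conductivity model k = 1/(A(x) + B·T) with a phonon-defect term that grows with cation disorder; the coefficients are placeholders, not the fitted parameters from this work.

```python
# Sketch of a composition-dependent conductivity expression, assuming
# k = 1/(A(x) + B*T) with A(x) = A0 + A_dis*x*(1 - x). All coefficients
# are illustrative placeholders, not the values fitted in the paper.
def conductivity(x, T, A0=0.035, A_dis=0.65, B=2.2e-4):
    """Illustrative k(x, T) for a (U_x,M_1-x)O2 solid solution (nominal W/m/K)."""
    A = A0 + A_dis * x * (1.0 - x)   # phonon-defect scattering term
    return 1.0 / (A + B * T)         # B*T: phonon-phonon term

# Degradation is largest at intermediate compositions, mirroring the abstract:
for x in (0.0, 0.5, 1.0):
    print(x, round(conductivity(x, 300.0), 2))
```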

  2. Identifying Objects via Encased X-Ray-Fluorescent Materials - the Bar Code Inside

    NASA Technical Reports Server (NTRS)

    Schramm, Harry F.; Kaiser, Bruce

    2005-01-01

    Systems for identifying objects by means of x-ray fluorescence (XRF) of encased labeling elements have been developed. The XRF spectra of objects so labeled would be analogous to the external bar code labels now used to track objects in everyday commerce. In conjunction with computer-based tracking systems, databases, and labeling conventions, the XRF labels could be used in essentially the same manner as that of bar codes to track inventories and to record and process commercial transactions. In addition, as summarized briefly below, embedded XRF labels could be used to verify the authenticity of products, thereby helping to deter counterfeiting and fraud. A system, as described above, is called an encased core product identification and authentication system (ECPIAS). The ECPIAS concept is a modified version of that of a related recently initiated commercial development of handheld XRF spectral scanners that would identify alloys or detect labeling elements deposited on the surfaces of objects. In contrast, an ECPIAS would utilize labeling elements encased within the objects of interest. The basic ECPIAS concept is best illustrated by means of an example of one of several potential applications: labeling of cultured pearls by labeling the seed particles implanted in oysters to grow the pearls. Each pearl farmer would be assigned a unique mixture of labeling elements that could be distinguished from the corresponding mixtures of other farmers. The mixture would be either incorporated into or applied to the surfaces of the seed prior to implantation in the oyster. If necessary, the labeled seed would be further coated to make it nontoxic to the oyster. After implantation, the growth of layers of mother of pearl on the seed would encase the XRF labels, making these labels integral, permanent parts of the pearls that could not be removed without destroying the pearls themselves. 
The XRF labels would be read by use of XRF scanners, the spectral data outputs of which

  3. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  4. Techniques for the analysis of data from coded-mask X-ray telescopes

    NASA Technical Reports Server (NTRS)

    Skinner, G. K.; Ponman, T. J.; Hammersley, A. P.; Eyles, C. J.

    1987-01-01

    Several techniques useful in the analysis of data from coded-mask telescopes are presented. Methods of handling changes in the instrument pointing direction are reviewed, and ways of using FFT techniques to do the deconvolution are considered. Emphasis is on techniques for optimally coded systems, but it is shown that the range of systems included in this class can be extended through the new concept of 'partial cycle averaging'.
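    The FFT route to deconvolution mentioned above can be sketched for a cyclic, optimally coded system; here a random open/closed mask stands in for a real URA pattern, and the balanced decoding array G = 2A - 1 is an assumption of this sketch, not this paper's specific method.

```python
import numpy as np

# FFT-based coded-mask reconstruction sketch: the detector image is the
# cyclic convolution of the sky with the mask A, and correlating it with the
# balanced decoding array G = 2A - 1 recovers the sky up to a flat offset.
rng = np.random.default_rng(1)
n = 32
A = (rng.random((n, n)) < 0.5).astype(float)   # open/closed mask pattern
sky = np.zeros((n, n)); sky[5, 7] = 100.0      # single point source

F = np.fft.fft2
D = np.real(np.fft.ifft2(F(sky) * F(A)))       # detector = sky (*) mask
G = 2 * A - 1                                   # decoding array
rec = np.real(np.fft.ifft2(F(D) * np.conj(F(G))))  # correlate via FFT

peak = np.unravel_index(np.argmax(rec), rec.shape)
print(peak)  # source recovered at (5, 7)
```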

  5. Deterministic and unambiguous dense coding

    NASA Astrophysics Data System (ADS)

    Wu, Shengjun; Cohen, Scott M.; Sun, Yuqing; Griffiths, Robert B.

    2006-04-01

    Optimal dense coding using a partially entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case where at most Ld messages can be transmitted with perfect fidelity, and in the unambiguous case where when the protocol succeeds (probability τx) Bob knows for sure that Alice sent message x, and when it fails (probability 1-τx) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ⩽ D a bound is obtained for Ld in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that Ld is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ⩽ D, assuming τx > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τx is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially entangled states, including noisy (mixed) states.

  6. Optical performances of the FM JEM-X masks

    NASA Astrophysics Data System (ADS)

    Reglero, V.; Rodrigo, J.; Velasco, T.; Gasent, J. L.; Chato, R.; Alamo, J.; Suso, J.; Blay, P.; Martínez, S.; Doñate, M.; Reina, M.; Sabau, D.; Ruiz-Urien, I.; Santos, I.; Zarauz, J.; Vázquez, J.

    2001-09-01

    The JEM-X Signal Multiplexing Systems are large HURA codes "written" in a pure tungsten plate 0.5 mm thick. 24,247 hexagonal pixels (25% open) are spread over a total area of 535 mm diameter. The tungsten plate is embedded in a mechanical structure formed by a Ti ring, a pretensioning system (Cu-Be) and an exoskeleton structure that provides the required stiffness. The JEM-X masks differ from the SPI and IBIS masks in the absence of a code support structure covering the mask assembly. Open pixels are fully transparent to X-rays. The scope of this paper is to report the optical performances of the FM JEM-X masks, defined by uncertainties on the pixel location (centroid) and size coming from the manufacturing and assembly processes. Stability of the code elements under thermoelastic deformations is also discussed. As a general statement, JEM-X mask optical properties are nearly one order of magnitude better than specified in 1994 during the ESA instrument selection.

  7. Dynamic Detection of Malicious Code in COTS Software

    DTIC Science & Technology

    2000-04-01

    These tools were run against documented hostile applets and ActiveX controls; some of the tools work only on mobile code (Java, ActiveX). [Excerpt garbled in extraction: a results table listing hostile applets (Tiny, Killer App, Exploder, Runner, ActiveX Check, Spy) and blocking scores for eSafe Protect Desktop (9/9 applets blocked, 13/17) and Surfinshield Online (9/9 blocked, 13/17) is omitted.] Exploder is an ActiveX control that performs a clean shutdown of your computer. The interface is attractive, although rather complex.

  8. Advanced x-ray imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Callas, John L. (Inventor); Soli, George A. (Inventor)

    1998-01-01

    An x-ray spectrometer that also provides images of an x-ray source. Coded aperture imaging techniques are used to provide high resolution images. Imaging position-sensitive x-ray sensors with good energy resolution are utilized to provide excellent spectroscopic performance. The system produces high resolution spectral images of the x-ray source which can be viewed in any one of a number of specific energy bands.

  9. Parallel processing a three-dimensional free-lagrange code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1989-01-01

    A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.
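    The theoretical speedup bound referred to above is Amdahl's law; a minimal sketch follows (the parallel fractions are illustrative, not the code's measured values).

```python
# Amdahl's law: with a fraction p of the serial runtime parallelizable and
# N processors, the speedup is S(N) = 1 / ((1 - p) + p / N).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.95, 0.99):
    print(p, round(amdahl_speedup(p, 4), 2))  # 4 CPUs, as on the X-MP/416
```

    Even at p = 0.99, four processors yield well under a 4x speedup, which is why serial hot spots dominate the timing analysis described above.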

  10. The Alba ray tracing code: ART

    NASA Astrophysics Data System (ADS)

    Nicolas, Josep; Barla, Alessandro; Juanhuix, Jordi

    2013-09-01

    The Alba ray tracing code (ART) is a suite of Matlab functions and tools for the ray tracing simulation of x-ray beamlines. The code is structured in different layers, which allow its usage as part of optimization routines as well as easy control from a graphical user interface. Additional tools for slope error handling and for grating efficiency calculations are also included. Generic characteristics of ART include the accumulation of rays to improve statistics without memory limitations, while still providing normalized values of flux and resolution in physically meaningful units.

  11. (U) Ristra Next Generation Code Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

    LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive, and mutually risk-mitigating paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.

  12. Color coding of control room displays: the psychocartography of visual layering effects.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2007-06-01

    To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).

  13. Hard X-ray Observation of Cygnus X-1 By the Marshall Imaging X-ray Experiment (MIXE2)

    NASA Technical Reports Server (NTRS)

    Minamitani, Takahisa; Apple, J. A.; Austin, R. A.; Dietz, K. L.; Koloziejczak, J. J.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    The second generation of the Marshall Imaging X-ray Experiment (MIXE2) was flown from Fort Sumner, New Mexico on May 7-8, 1997. The experiment consists of a coded-aperture telescope with a field of view of 1.8 degrees (FWHM) and an angular resolution of 6.9 arcminutes. The detector is a large (7.84×10^4 cm^2) effective-area microstrip proportional counter filled with 2.0×10^5 Pa of xenon with 2% isobutylene. We present the MIXE2 observation of the 20-80 keV spectrum and timing variability of Cygnus X-1 made during the balloon flight.

  14. Implementation of Soft X-ray Tomography on NSTX

    NASA Astrophysics Data System (ADS)

    Tritz, K.; Stutman, D.; Finkenthal, M.; Granetz, R.; Menard, J.; Park, W.

    2003-10-01

    A set of poloidal ultrasoft X-ray arrays is operated by the Johns Hopkins group on NSTX. To enable MHD mode analysis independent of the magnetic reconstruction, the McCormick-Granetz tomography code developed at MIT is being adapted to the NSTX geometry. Tests of the code using synthetic data show that the present X-ray system is adequate for m=1 tomography. In addition, we have found that spline basis functions may be better suited than Bessel functions for the reconstruction of radially localized phenomena in NSTX. The tomography code was also used to determine the necessary array expansion and optimal array placement for the characterization of higher m modes (m=2,3) in the future. Initial reconstruction of experimental soft X-ray data has been performed for m=1 internal modes, which are often encountered in high beta NSTX discharges. The reconstruction of these modes will be compared to predictions from the M3D code and magnetic measurements.

  15. Large Coded Aperture Mask for Spaceflight Hard X-ray Images

    NASA Technical Reports Server (NTRS)

    Vigneau, Danielle N.; Robinson, David W.

    2002-01-01

    The 2.6 square meter coded aperture mask is a vital part of the Burst Alert Telescope on the Swift mission. A random, but known pattern of more than 50,000 lead tiles, each 5 mm square, was bonded to a large honeycomb panel which projects a shadow on the detector array during a gamma ray burst. A two-year development process was necessary to explore ideas, apply techniques, and finalize procedures to meet the strict requirements for the coded aperture mask. Challenges included finding a honeycomb substrate with minimal gamma ray attenuation, selecting an adhesive with adequate bond strength to hold the tiles in place but soft enough to allow the tiles to expand and contract without distorting the panel under large temperature gradients, and eliminating excess adhesive from all untiled areas. The largest challenge was to find an efficient way to bond the > 50,000 lead tiles to the panel with positional tolerances measured in microns. In order to generate the desired bondline, adhesive was applied and allowed to cure to each tile. The pre-cured tiles were located in a tool to maintain positional accuracy, wet adhesive was applied to the panel, and it was lowered to the tile surface with synchronized actuators. Using this procedure, the entire tile pattern was transferred to the large honeycomb panel in a single bond. The pressure for the bond was achieved by enclosing the entire system in a vacuum bag. Thermal vacuum and acoustic tests validated this approach. This paper discusses the methods, materials, and techniques used to fabricate this very large and unique coded aperture mask for the Swift mission.

  16. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
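    The kinematics-based sum that such a code evaluates can be sketched directly; this toy uses a small cubic lattice with unit scattering factors and no GPU decomposition, so every name and parameter below is illustrative rather than GAPD's implementation.

```python
import numpy as np

# Kinematic diffraction in a nutshell: the scattered intensity at wavevector
# transfer q is I(q) = |sum_j f_j exp(i q . r_j)|^2 over all atoms j.
a = 1.0                                   # lattice constant (arbitrary units)
idx = np.arange(8)
r = np.array([(i, j, k) for i in idx for j in idx for k in idx], float) * a

def intensity(q):
    phases = r @ np.asarray(q, float)     # q . r_j for every atom
    amp = np.exp(1j * phases).sum()       # unit form factors f_j = 1
    return float(np.abs(amp) ** 2)

g = 2 * np.pi / a * np.array([1.0, 0.0, 0.0])   # a reciprocal-lattice point
print(intensity(g), intensity(g * 1.5))   # Bragg peak vs destructive interference
```

    A polychromatic simulation repeats this sum over the source spectrum and maps q onto detector pixels; the atom loop is what GAPD decomposes across GPU threads.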

  17. Flexible manipulation of terahertz wave reflection using polarization insensitive coding metasurfaces.

    PubMed

    Jiu-Sheng, Li; Ze-Jiang, Zhao; Jian-Quan, Yao

    2017-11-27

    To extend to 3-bit encoding, we propose notched-wheel structures as polarization-insensitive coding metasurfaces to control terahertz wave reflection and suppress backward scattering. Using a coding sequence of "00110011…" along the x-axis direction and a 16 × 16 random coding sequence, we investigate the polarization-insensitive properties of the coding metasurfaces. By designing the coding sequences of the basic coding elements, the terahertz wave reflection can be flexibly manipulated. Additionally, the radar cross section (RCS) in the backward direction is reduced by more than 10 dB over a wide band. The present approach offers applications for novel terahertz manipulation devices.

  18. Weight-4 Parity Checks on a Surface Code Sublattice with Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Takita, Maika; Corcoles, Antonio; Magesan, Easwar; Bronn, Nicholas; Hertzberg, Jared; Gambetta, Jay; Steffen, Matthias; Chow, Jerry

    We present a superconducting qubit quantum processor design amenable to the surface code architecture. In such an architecture, parity checks on the data qubits, performed by measuring their X- and Z-syndrome qubits, constitute a critical aspect. Here we show fidelities and outcomes of X- and Z-parity measurements done on a syndrome qubit in a full plaquette consisting of one syndrome qubit coupled via bus resonators to four code qubits. Parities are measured after the four code qubits are prepared into sixteen initial states in each basis. Results show a strong dependence on the ZZ interaction between qubits on the same bus resonators. This work is supported by IARPA under Contract W911NF-10-1-0324.

  19. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message with distance of at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, which is different from other codes that always need to rewrite y completely as x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. At last, we point out some useful properties regarding safety and security of the stored data.
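    The "update few symbols" idea can be illustrated with the simplest block code the abstract names, an XOR parity code (RAID-4-like): changing one message symbol touches only that symbol and the parity. This is an illustrative sketch, not the RWC construction itself.

```python
# XOR parity code sketch (RAID-4-like): n = k + 1 symbols, and a single-symbol
# update rewrites only two codeword positions instead of the whole codeword.
def encode(msg):                 # msg: list of int symbols
    parity = 0
    for s in msg:
        parity ^= s
    return msg + [parity]        # codeword y of n = k + 1 symbols

def update(codeword, i, new):
    """Update symbol i in place; touches only position i and the parity."""
    old = codeword[i]
    codeword[i] = new
    codeword[-1] ^= old ^ new    # patch parity incrementally
    return codeword

y = encode([3, 5, 7])
y = update(y, 1, 9)
print(y, y == encode([3, 9, 7]))  # incremental update matches re-encoding
```

    An RW code generalizes this trade-off: r controls how many symbols suffice to read, and w how many must be written, with the parity code and RS codes as extreme cases.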

  20. Parallel processing a real code: A case history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1988-01-01

    A three-dimensional, time-dependent Free-Lagrange hydrodynamics code has been multitasked and autotasked on a Cray X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the Cray multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The 3-D algorithm has presented a number of problems that simpler algorithms, such as 1-D hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a Cray 1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given. 8 refs., 13 figs.

  1. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. Copyright © 2014 Elsevier
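
    The comma-free property at the top of the scale above can be checked mechanically: under the standard definition, a trinucleotide code is comma-free if no codeword appears in a shifted frame inside any concatenation of two codewords. A minimal sketch (the example codes are illustrative, not the paper's code X):

    ```python
    def is_comma_free(code):
        """Return True if no codeword occurs in frame 1 or 2 of any
        concatenation of two codewords (the two shifted reading frames)."""
        code = set(code)
        for a in code:
            for b in code:
                pair = a + b                      # 6 letters; shifts start at 1 and 2
                if pair[1:4] in code or pair[2:5] in code:
                    return False
        return True

    print(is_comma_free({"AAC", "GAT"}))   # True: no shifted frame yields a codeword
    print(is_comma_free({"AAA"}))          # False: AAA+AAA contains AAA in shifted frames
    ```

    A code that passes this check has RFC probability 1, the top of the genetic scale described in the abstract.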

  2. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., for wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
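
    The correlation decoding described in the presentation can be sketched in one dimension: the detector records the mask pattern shifted by the source direction, and correlating the recorded pattern with the mask recovers a peak at that shift. The mask length, random seed, and single-source geometry are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mask = rng.integers(0, 2, 64)            # random open/closed mask elements
    source_shift = 17                        # true source direction (to be recovered)

    # Detector records the mask pattern displaced by the source direction
    detector = np.roll(mask, source_shift).astype(float)

    # Cross-correlate with the mean-subtracted mask to suppress the flat offset
    corr = np.array([np.dot(detector, np.roll(mask - mask.mean(), s))
                     for s in range(64)])

    print(int(np.argmax(corr)))              # peak at the source direction
    ```

    Real coded-mask telescopes sum many such shifted copies (one per sky pixel), which is why the matrix and deconvolution approaches mentioned in the slides become necessary.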

  3. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  4. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

    The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is to assist in setting up and tuning the cyclotrons, taking into account the main field and extraction channel imperfections. The computing platform for the package is an Alpha Station running the UNIX operating system with an X-Windows graphical interface. A multiple-programming-language approach was used in order to combine the reliability of numerical algorithms developed over a long period of time in the laboratory with the friendliness of a modern-style user interface. This paper describes the capabilities and features of the codes in their present state.

  5. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.

  6. Overview of the ArbiTER edge plasma eigenvalue code

    NASA Astrophysics Data System (ADS)

    Baver, Derek; Myra, James; Umansky, Maxim

    2011-10-01

    The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.

  7. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    NASA Astrophysics Data System (ADS)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard is used to code the first (intra) frame of a video and achieves good coding efficiency compared to previous video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method. This method increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications, so many researchers have developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in H.264 using fast mode decision intra prediction algorithms based on various techniques suffered increased bit rate and degraded picture quality (PSNR) across quantization parameters. Many previous fast mode decision approaches to intra frame coding achieved only a reduction of computational complexity or a saving in encoding time, at the cost of increased bit rate and loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, this paper develops a better approach: a Gaussian pulse for intra frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before the quantization process. Multiplying each 4x4 block of integer-transformed coefficients by the Gaussian pulse at the macroblock level scales the information in the coefficients in a reversible manner. The resulting signal becomes abstract. Frequency samples are made abstract in a known and controllable manner without intermixing of coefficients, it avoids
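
    The reversible scaling step described above can be illustrated as follows: since a Gaussian window is nowhere zero, elementwise multiplication of a 4x4 coefficient block can be undone exactly by elementwise division, so no coefficient information is lost and coefficients are never intermixed. The window shape and sigma are assumptions for illustration, not the paper's exact pulse.

    ```python
    import numpy as np

    def gaussian_window(n=4, sigma=2.0):
        """Separable n x n Gaussian window, strictly positive everywhere."""
        idx = np.arange(n) - (n - 1) / 2
        g = np.exp(-idx**2 / (2 * sigma**2))
        return np.outer(g, g)

    coeffs = np.arange(16, dtype=float).reshape(4, 4)   # stand-in transform block
    w = gaussian_window()

    scaled = coeffs * w          # applied elementwise before quantization
    restored = scaled / w        # exact inverse (up to float rounding)

    assert np.allclose(restored, coeffs)
    ```

    Because the operation is elementwise, each frequency sample is rescaled independently, which is what keeps the coefficients from intermixing.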

  8. Diagnostic x-ray dosimetry using Monte Carlo simulation.

    PubMed

    Ioppolo, J L; Price, R I; Tuchyna, T; Buckley, C E

    2002-05-21

    An Electron Gamma Shower version 4 (EGS4) based user code was developed to simulate the absorbed dose in humans during routine diagnostic radiological procedures. Measurements of absorbed dose using thermoluminescent dosimeters (TLDs) were compared directly with EGS4 simulations of absorbed dose in homogeneous, heterogeneous and anthropomorphic phantoms. Realistic voxel-based models characterizing the geometry of the phantoms were used as input to the EGS4 code. The voxel geometry of the anthropomorphic Rando phantom was derived from a CT scan of Rando. The 100 kVp diagnostic energy x-ray spectra of the apparatus used to irradiate the phantoms were measured, and provided as input to the EGS4 code. The TLDs were placed at evenly spaced points symmetrically about the central beam axis, which was perpendicular to the cathode-anode x-ray axis, at a number of depths. The TLD measurements in the homogeneous and heterogeneous phantoms were on average within 7% of the values calculated by EGS4. Estimates of effective dose with errors less than 10% required fewer photon histories (1 × 10^7) than the calculation of dose profiles (1 × 10^9). The EGS4 code was able to satisfactorily predict patient and staff effective dose, and thereby provides an instrument for reducing the effective dose imparted during radiological investigations.
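
    The core idea of tallying dose from sampled photon interactions can be illustrated with a toy Monte Carlo. This is not EGS4: the attenuation coefficient, the slab geometry, and the absorb-on-first-interaction assumption are all simplifications for illustration.

    ```python
    import random

    def simulate(n_photons=100_000, mu=0.2, depth=30.0, nbins=30, seed=1):
        """Tally interactions per depth bin for photons entering a slab.

        Free path lengths are sampled from the exponential attenuation law
        with linear attenuation coefficient mu (per cm); each interacting
        photon is assumed to deposit its energy locally.
        """
        random.seed(seed)
        bin_width = depth / nbins
        dose = [0.0] * nbins
        for _ in range(n_photons):
            x = random.expovariate(mu)         # sampled interaction depth
            if x < depth:                      # interacts inside the phantom
                dose[int(x / bin_width)] += 1.0
        return dose

    dose = simulate()
    # deeper bins receive fewer interactions (exponential fall-off)
    print(dose[0] > dose[10] > dose[20])
    ```

    Production codes such as EGS4 replace the local-deposition assumption with full electron and photon transport, which is why voxelized phantoms and measured spectra are needed as input.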

  9. PharmTeX: a LaTeX-Based Open-Source Platform for Automated Reporting Workflow.

    PubMed

    Rasmussen, Christian Hove; Smith, Mike K; Ito, Kaori; Sundararajan, Vijayakumar; Magnusson, Mats O; Niclas Jonsson, E; Fostvedt, Luke; Burger, Paula; McFadyen, Lynn; Tensfeldt, Thomas G; Nicholas, Timothy

    2018-03-16

    Every year, the pharmaceutical industry generates a large number of scientific reports related to drug research, development, and regulatory submissions. Many of these reports are created using text processing tools such as Microsoft Word. Given the large number of figures, tables, references, and other elements, this is often a tedious task involving hours of copying and pasting and substantial efforts in quality control (QC). In the present article, we present the LaTeX-based open-source reporting platform, PharmTeX, a community-based effort to make reporting simple, reproducible, and user-friendly. The PharmTeX creators put a substantial effort into simplifying the sometimes complex elements of LaTeX into user-friendly functions that rely on advanced LaTeX and Perl code running in the background. Using this setup makes LaTeX much more accessible for users with no prior LaTeX experience. A software collection was compiled for users not wanting to manually install the required software components. The PharmTeX templates allow for the inclusion of tables directly from mathematical software output, as well as figures in several formats. Code listings can be included directly from source. No previous experience and only a few hours of training are required to start writing reports using PharmTeX. PharmTeX significantly reduces the time required for creating a scientific report fully compliant with regulatory and industry expectations. QC is made much simpler, since there is a direct link between analysis output and report input. PharmTeX makes available to report authors the strengths of LaTeX document processing without the need for extensive training.

  10. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
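
    The syndrome step of such a decoder can be sketched for the (31,15) code over GF(2^5); the primitive polynomial x^5 + x^2 + 1 is a common choice assumed here for illustration. Each syndrome is an evaluation S_i = r(α^i), and for a single error of value e at degree j, S_i = e·α^(i·j), so the ratio S_2/S_1 = α^j locates the error, the simplest case of the Chien-search step.

    ```python
    # Build exp/log tables for GF(32) with assumed primitive poly x^5 + x^2 + 1
    EXP, LOG = [0] * 62, [0] * 32
    x = 1
    for i in range(31):
        EXP[i] = EXP[i + 31] = x
        LOG[x] = i
        x <<= 1
        if x & 0x20:
            x ^= 0x25              # reduce modulo x^5 + x^2 + 1

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def syndrome(received, i):
        """Evaluate r(alpha^i) by Horner's rule; received[0] is the
        highest-degree coefficient of the received polynomial."""
        s = 0
        for c in received:
            s = gf_mul(s, EXP[i]) ^ c
        return s

    # All-zero codeword with a single error e = 9 injected at degree 23
    r = [0] * 31
    r[7] = 9                       # index 7 of 31 symbols -> degree 31 - 1 - 7 = 23
    s1, s2 = syndrome(r, 1), syndrome(r, 2)
    error_location = (LOG[s2] - LOG[s1]) % 31
    print(error_location)          # 23, the degree (position) of the error
    ```

    A full decoder extends this with the error-locator polynomial (Berlekamp's algorithm or Lin's table, as in the abstract) and Forney's method for the error values.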

  11. Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1997-01-01

    In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5, 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).

  12. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance and validation can be simulated through Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids computed by the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (T_p), scatter (T_s), and total (T_t) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. T_p, T_s, T_t, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating the designs of grids.

  13. Bootstrap current control studies in the Wendelstein 7-X stellarator using the free-plasma-boundary version of the SIESTA MHD equilibrium code

    NASA Astrophysics Data System (ADS)

    Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Sanchez, R.; Tribaldos, V.; Geiger, J.

    2018-02-01

    The recently developed free-plasma-boundary version of the SIESTA MHD equilibrium code (Hirshman et al 2011 Phys. Plasmas 18 062504; Peraza-Rodriguez et al 2017 Phys. Plasmas 24 082516) is used for the first time to study scenarios with considerable bootstrap currents for the Wendelstein 7-X (W7-X) stellarator. Bootstrap currents in the range of tens of kA can lead to the formation of unwanted magnetic island chains or stochastic regions within the plasma and can alter the boundary rotational transform due to the small shear in W7-X. The latter issue is of relevance since the island divertor operation of W7-X relies on a proper positioning of magnetic island chains at the plasma edge to control the particle and energy exhaust towards the divertor plates. Two scenarios are examined with the new free-plasma-boundary capabilities of SIESTA: a freely evolving bootstrap current scenario that illustrates the difficulties arising from the dislocation of the boundary islands, and a second one in which off-axis electron cyclotron current drive (ECCD) is applied to compensate the effects of the bootstrap current and keep the island divertor configuration intact. SIESTA finds that off-axis ECCD is indeed able to keep the location and phase of the edge magnetic island chain unchanged, but it may also lead to an undesired stochastization of parts of the confined plasma if the EC deposition radial profile becomes too narrow.

  14. Silencing of X-Linked MicroRNAs by Meiotic Sex Chromosome Inactivation

    PubMed Central

    Royo, Hélène; Seitz, Hervé; ElInati, Elias; Peters, Antoine H. F. M.; Stadler, Michael B.; Turner, James M. A.

    2015-01-01

    During the pachytene stage of meiosis in male mammals, the X and Y chromosomes are transcriptionally silenced by Meiotic Sex Chromosome Inactivation (MSCI). MSCI is conserved in therian mammals and is essential for normal male fertility. Transcriptomics approaches have demonstrated that in mice, most or all protein-coding genes on the X chromosome are subject to MSCI. However, it is unclear whether X-linked non-coding RNAs behave in a similar manner. The X chromosome is enriched in microRNA (miRNA) genes, with many exhibiting testis-biased expression. Importantly, high expression levels of X-linked miRNAs (X-miRNAs) have been reported in pachytene spermatocytes, indicating that these genes may escape MSCI, and perhaps play a role in the XY-silencing process. Here we use RNA FISH to examine X-miRNA expression in the male germ line. We find that, like protein-coding X-genes, X-miRNAs are expressed prior to prophase I and are thereafter silenced during pachynema. X-miRNA silencing does not occur in mouse models with defective MSCI. Furthermore, X-miRNAs are expressed at pachynema when present as autosomally integrated transgenes. Thus, we conclude that silencing of X-miRNAs during pachynema in wild type males is MSCI-dependent. Importantly, misexpression of X-miRNAs during pachynema causes spermatogenic defects. We propose that MSCI represents a chromosomal mechanism by which X-miRNAs, and other potential X-encoded repressors, can be silenced, thereby regulating genes with critical late spermatogenic functions. PMID:26509798

  15. Coupling of PIES 3-D Equilibrium Code and NIFS Bootstrap Code with Applications to the Computation of Stellarator Equilibria

    NASA Astrophysics Data System (ADS)

    Monticello, D. A.; Reiman, A. H.; Watanabe, K. Y.; Nakajima, N.; Okamoto, M.

    1997-11-01

    The existence of bootstrap currents in both tokamaks and stellarators was confirmed experimentally more than ten years ago. Such currents can have significant effects on the equilibrium and stability of these MHD devices. In addition, stellarators, with the notable exception of W7-X, are predicted to have such large bootstrap currents that reliable equilibrium calculations require the self-consistent evaluation of bootstrap currents. Modeling of discharges which contain islands requires an algorithm that does not assume good surfaces. Only one of the two existing 3-D equilibrium codes, PIES (Reiman, A. H., Greenside, H. S., Comput. Phys. Commun. 43 (1986)), can easily be modified to handle bootstrap current. Here we report on the coupling of the PIES 3-D equilibrium code and the NIFS bootstrap code (Watanabe, K., et al., Nuclear Fusion 35 (1995) 335).

  16. VizieR Online Data Catalog: RefleX : X-ray-tracing code (Paltani+, 2017)

    NASA Astrophysics Data System (ADS)

    Paltani, S.; Ricci, C.

    2017-11-01

    We provide here the RefleX executable, for both Linux and MacOSX, together with the User Manual and an example script file and output file. Running (for instance) reflex_linux will produce the file reflex.out. Note that the results may differ slightly depending on the OS, because of slight differences in some implementations of numerical computations; the differences are scientifically meaningless. (5 data files).

  17. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.
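
    The coherence constraint mentioned above can be made concrete: a standard measure in compressed sensing is the mutual coherence of the sensing matrix, the largest normalized inner product between distinct columns, which designs aim to keep low. A minimal sketch with a generic random matrix (not the authors' aperture-induced matrix or their descent algorithm):

    ```python
    import numpy as np

    def mutual_coherence(A):
        """Largest absolute normalized inner product between distinct columns."""
        A = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
        gram = np.abs(A.T @ A)
        np.fill_diagonal(gram, 0.0)                        # ignore self-products
        return gram.max()

    rng = np.random.default_rng(0)
    A = rng.standard_normal((32, 64))      # stand-in compressive sensing matrix
    print(round(mutual_coherence(A), 3))
    ```

    A gradient-descent design such as the one in the abstract would repeatedly perturb the aperture pattern and keep changes that reduce this quantity while respecting the homogeneity constraint.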

  18. Vector and Raster Data Storage Based on Morton Code

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in geographic information system environments, and there is still no proper way to solve it. This article proposes a method to interpret vector data and raster data together. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the (X, Y) coordinates of the vector data to Morton codes. This method stores vector and raster data in an Oracle database and uses Morton codes, instead of row/column numbers and (X, Y) coordinates, to mark the position information of vector and raster data. Using Morton codes to mark geographic information lets data storage make full use of the storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful for situations that require analysing or displaying vector data and raster data at the same time.
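
    The row/column-to-Morton-code conversion described in the abstract can be sketched by bit interleaving; this is a generic Z-order implementation, not the authors' Visual C++ code, and the 16-bit width is an assumption.

    ```python
    def morton_encode(x, y, bits=16):
        """Interleave the bits of x and y into one Z-order key."""
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
            code |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
        return code

    def morton_decode(code, bits=16):
        """Recover (x, y) by de-interleaving the key's bits."""
        x = y = 0
        for i in range(bits):
            x |= ((code >> (2 * i)) & 1) << i
            y |= ((code >> (2 * i + 1)) & 1) << i
        return x, y

    print(morton_encode(5, 3))        # 27 (0b011011)
    assert morton_decode(27) == (5, 3)
    ```

    Because nearby (x, y) cells share high-order key bits, sorting rows by Morton code keeps spatially adjacent raster cells and vector vertices close together in storage, which is the locality benefit the abstract relies on.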

  19. X-ray spectral signatures of photoionized plasmas. [astrophysics

    NASA Technical Reports Server (NTRS)

    Liedahl, Duane A.; Kahn, Steven M.; Osterheld, Albert L.; Goldstein, William H.

    1990-01-01

    Plasma emission codes have become a standard tool for the analysis of spectroscopic data from cosmic X-ray sources. However, the assumption of collisional equilibrium, typically invoked in these codes, renders them inapplicable to many important astrophysical situations, particularly those involving X-ray photoionized nebulae. This point is illustrated by comparing model spectra which have been calculated under conditions appropriate to both coronal plasmas and X-ray photoionized plasmas. It is shown that the (3s-2p)/(3d-2p) line ratios in the Fe L-shell spectrum can be used to effectively discriminate between these two cases. This diagnostic will be especially useful for data analysis associated with AXAF and XMM, which will carry spectroscopic instrumentation with sufficient sensitivity and resolution to identify X-ray photoionized nebulae in a wide range of astrophysical environments.

  20. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  1. X-ray microlaminography with polycapillary optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dabrowski, K. M.; Dul, D. T.; Wrobel, A.

    2013-06-03

    We demonstrate layer-by-layer x-ray microimaging using polycapillary optics. The depth resolution is achieved without sample or source rotation, in a way similar to classical tomography or laminography. The method takes advantage of the large angular apertures of polycapillary optics and of their specific microstructure, which is treated as a coded aperture. The imaging geometry is compatible with polychromatic x-ray sources and with scanning and confocal x-ray fluorescence setups.

  2. Modeling Multi-Bunch X-band Photoinjector Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, R A; Anderson, S G; Gibson, D J

    An X-band test station is being developed at LLNL to investigate accelerator optimization for future upgrades to mono-energetic gamma-ray technology at LLNL. The test station will consist of a 5.5 cell X-band rf photoinjector, a single accelerator section, and beam diagnostics. Of critical importance to the functioning of the LLNL X-band system with multiple electron bunches is the performance of the photoinjector. In-depth modeling of the Mark 1 LLNL/SLAC X-band rf photoinjector performance will be presented, addressing important challenges that must be met in order to fabricate a multi-bunch Mark 2 photoinjector. Emittance performance is evaluated under different nominal electron bunch parameters using electrostatic codes such as PARMELA. Wake potential is analyzed using electromagnetic time-domain simulations with the ACE3P code T3P. Plans for multi-bunch experiments and implementation of photoinjector advances for the Mark 2 design will also be discussed.

  3. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint-length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 × 10^-8 and a BER of 1.4 × 10^-9. The (15,1/6) code to be used by the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Since these lower-rate codes require higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  4. Leadership Class Configuration Interaction Code - Status and Opportunities

    NASA Astrophysics Data System (ADS)

    Vary, James

    2011-10-01

    With support from SciDAC-UNEDF (www.unedf.org) nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development, and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archives the results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers, and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).

  5. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  6. X-ray metrology and performance of a 45-cm long x-ray deformable mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, Lisa A., E-mail: poyneer1@llnl.gov; Brejnholt, Nicolai F.; Hill, Randall

    2016-05-15

    We describe experiments with a 45-cm long x-ray deformable mirror (XDM) that have been conducted in End Station 2, Beamline 5.3.1 at the Advanced Light Source. A detailed description of the hardware implementation is provided. We explain our one-dimensional Fresnel propagation code that correctly handles grazing incidence and includes a model of the XDM. This code is used to simulate and verify experimental results. Initial long trace profiler metrology of the XDM at 7.5 keV is presented. The ability to measure a large (150-nm amplitude) height change on the XDM is demonstrated. The results agree well with the simulated experiment at an error level of 1 μrad RMS. Direct imaging of the x-ray beam also shows the expected change in intensity profile at the detector.
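    The abstract's one-dimensional Fresnel propagation can be illustrated with a minimal angular-spectrum sketch. This is a normal-incidence toy using a naive O(N^2) DFT for self-containment; the paper's code additionally handles grazing incidence and the XDM surface model, and the grid, wavelength, and distance values below are illustrative assumptions, not beamline parameters.

    ```python
    # Minimal 1-D Fresnel propagation sketch (angular-spectrum method,
    # normal incidence). A naive O(N^2) DFT keeps it dependency-free;
    # real codes use FFTs. Parameters are illustrative only.
    import cmath

    def dft(x, sign):
        n = len(x)
        return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                    for j in range(n)) for k in range(n)]

    def fresnel_propagate(field, dx, wavelength, z):
        """Propagate a sampled complex field a distance z (meters)."""
        n = len(field)
        spectrum = dft(field, -1)
        filtered = []
        for k in range(n):
            # spatial frequency with FFT-style wrap-around ordering
            fk = (k if k <= n // 2 else k - n) / (n * dx)
            h = cmath.exp(-1j * cmath.pi * wavelength * z * fk * fk)
            filtered.append(spectrum[k] * h)
        back = dft(filtered, +1)
        return [v / n for v in back]    # normalize the inverse DFT
    ```

    A useful sanity check of any such propagator is reversibility: propagating by z and then by -z must return the input field, since the two transfer functions cancel exactly.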

  7. X-ray metrology and performance of a 45-cm long x-ray deformable mirror

    DOE PAGES

    Poyneer, Lisa A.; Brejnholt, Nicolai F.; Hill, Randall; ...

    2016-05-20

    We describe experiments with a 45-cm long x-ray deformable mirror (XDM) that have been conducted in End Station 2, Beamline 5.3.1 at the Advanced Light Source. A detailed description of the hardware implementation is provided. We explain our one-dimensional Fresnel propagation code that correctly handles grazing incidence and includes a model of the XDM. This code is used to simulate and verify experimental results. Initial long trace profiler metrology of the XDM at 7.5 keV is presented. The ability to measure a large (150-nm amplitude) height change on the XDM is demonstrated. The results agree well with the simulated experiment at an error level of 1 μrad RMS. Lastly, direct imaging of the x-ray beam also shows the expected change in intensity profile at the detector.

  8. Residus de 2-formes differentielles sur les surfaces algebriques et applications aux codes correcteurs d'erreurs

    NASA Astrophysics Data System (ADS)

    Couvreur, A.

    2009-05-01

    The theory of algebraic-geometric codes was developed in the early 1980s following a paper by V. D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first one, called "functional", uses some rational functions on X, and the second one, called "differential", involves some rational 1-forms on this curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction for algebraic varieties of arbitrary dimension was given by Y. Manin in a 1984 article. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. We then study the properties of these codes and particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears: whereas on a curve a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This observation motivates the study of codes which are the orthogonal of some functional code on a surface. We prove that, under certain conditions on the surface, these codes can be realized as sums of differential codes. Moreover, we show that answers to some open problems "a la Bertini" would give very interesting information about the parameters of these codes.
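    The functional construction is easy to see in its simplest case: take X to be the projective line over GF(p) and evaluate the polynomials of degree below k at n distinct rational points, which yields a Reed-Solomon code with parameters [n, k, n - k + 1]. The field, length, and dimension below are illustrative choices for a brute-force sketch.

    ```python
    # Sketch of Goppa's "functional" construction in its simplest case:
    # X = P^1 over GF(p), functions = polynomials of degree < k,
    # evaluated at n distinct points. This is exactly a Reed-Solomon
    # code; brute-force enumeration verifies the MDS distance n - k + 1.
    from itertools import product

    def evaluation_code(p, points, k):
        """All codewords of the degree-<k evaluation code over GF(p)."""
        words = []
        for coeffs in product(range(p), repeat=k):
            words.append(tuple(
                sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
                for x in points))
        return words

    def min_distance(words):
        """Minimum Hamming weight over the nonzero codewords."""
        return min(sum(c != 0 for c in w) for w in words if any(w))
    ```

    The distance bound follows because a nonzero polynomial of degree less than k has at most k - 1 roots, so every nonzero codeword has at least n - k + 1 nonzero coordinates.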

  9. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.

  10. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error-locator polynomial and solve for its roots.
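    For a single error the direct approach is very short: with parity checks S0 = sum of r_j and S1 = sum of r_j * alpha^j, an error of value e at position j gives S0 = e and S1 = e * alpha^j, so the location is the discrete log of S1/S0 and the value is S0 itself, with no locator polynomial involved. The sketch below works in GF(2^4) with alpha a root of x^4 + x + 1; these parameters are illustrative, not the memory codes of the paper.

    ```python
    # Direct single-error syndrome decoding sketch over GF(2^4).
    # S0 = sum r_j, S1 = sum r_j * alpha^j; for one error of value e at
    # position j: S0 = e, S1 = e * alpha^j, so j = log(S1) - log(S0).

    # Exponential/log tables for GF(16), alpha a root of x^4 + x + 1.
    EXP, LOG = [0] * 30, [0] * 16
    x = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = x
        LOG[x] = i
        x <<= 1
        if x & 0x10:
            x ^= 0x13                  # reduce modulo x^4 + x + 1

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def decode_single_error(received):
        """Correct one symbol error directly from syndromes S0, S1."""
        s0 = 0
        s1 = 0
        for j, r in enumerate(received):
            s0 ^= r                    # addition in GF(2^m) is XOR
            s1 ^= gf_mul(r, EXP[j])
        if s0 == 0 and s1 == 0:
            return list(received)      # no error detected
        pos = (LOG[s1] - LOG[s0]) % 15 # alpha^pos = S1 / S0
        out = list(received)
        out[pos] ^= s0                 # error value is S0
        return out
    ```

    This is the flavor of shortcut the paper exploits: for one or two errors the syndromes determine locations and values in closed form, avoiding the iterative Berlekamp-Massey machinery entirely.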

  11. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
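    The scan-and-decode principle behind the sensor can be sketched in one dimension. As an assumption for illustration, a maximal-length (m-sequence) mask stands in for the 2-D URAs the detector actually uses; both share the flat-sidelobe correlation property that makes reconstruction a single convolution.

    ```python
    # 1-D sketch of coded aperture sensing: a pseudorandom mask is
    # stepped across the image plane, a single detector records the
    # transmitted sum at each shift, and correlating with a +/-1
    # decoding mask recovers the image exactly (up to a known scale).
    # An m-sequence mask is used as a 1-D stand-in for a 2-D URA.

    def m_sequence(taps=(4, 1), n=4):
        """Length 2^n - 1 maximal LFSR sequence (taps for x^4 + x + 1)."""
        state = [1] * n
        out = []
        for _ in range(2 ** n - 1):
            out.append(state[-1])
            fb = state[taps[0] - 1] ^ state[taps[1] - 1]
            state = [fb] + state[:-1]
        return out

    def measure(image, aperture):
        """Detector reading for each cyclic shift of the aperture."""
        n = len(image)
        return [sum(image[i] * aperture[(i + k) % n] for i in range(n))
                for k in range(n)]

    def reconstruct(meas, aperture):
        """Correlate with the +/-1 decoding mask; scale is (n+1)/2."""
        n = len(meas)
        g = [2 * a - 1 for a in aperture]
        scale = (n + 1) // 2
        return [sum(meas[k] * g[(i + k) % n] for k in range(n)) // scale
                for i in range(n)]
    ```

    Because the m-sequence's cyclic autocorrelation is two-valued (peak n, sidelobes -1), the mask-decoder correlation collapses to a delta function and the reconstruction is exact in integer arithmetic.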

  12. Main functions, recent updates, and applications of Synchrotron Radiation Workshop code

    NASA Astrophysics Data System (ADS)

    Chubar, Oleg; Rakitin, Maksim; Chen-Wiegart, Yu-Chen Karen; Chu, Yong S.; Fluerasu, Andrei; Hidas, Dean; Wiegart, Lutz

    2017-08-01

    The paper presents an overview of the main functions and new application examples of the "Synchrotron Radiation Workshop" (SRW) code. SRW supports high-accuracy calculations of different types of synchrotron radiation, and simulations of propagation of fully-coherent radiation wavefronts, partially-coherent radiation from a finite-emittance electron beam of a storage ring source, and time-/frequency-dependent radiation pulses of a free-electron laser, through X-ray optical elements of a beamline. An extended library of physical-optics "propagators" for different types of reflective, refractive and diffractive X-ray optics with their typical imperfections, implemented in SRW, enables simulation of practically any X-ray beamline in a modern light source facility. The high accuracy of the calculation methods used in SRW allows for multiple applications of this code, not only in the area of development of instruments and beamlines for new light source facilities, but also in areas such as electron beam diagnostics, commissioning and performance benchmarking of insertion devices and individual X-ray optical elements of beamlines. Applications of SRW in these areas, facilitating development and advanced commissioning of beamlines at the National Synchrotron Light Source II (NSLS-II), are described.

  13. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
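    The bookkeeping behind hierarchical (block) time steps is simple to sketch: each particle is assigned a power-of-two level L and advances with dt_max / 2^L, and on sub-step s only particles whose stride divides s are updated. The drift-only update and the level assignments below are illustrative, not GOTHIC's actual integrator.

    ```python
    # Sketch of power-of-two "block" hierarchical time stepping.
    # Particle i on level L advances with step dt_max / 2^L; on global
    # sub-step s only particles whose stride divides s are touched, so
    # cheap slow particles are updated far less often than fast ones.

    def advance_block_steps(positions, velocities, levels, dt_max):
        """Advance all particles by dt_max; return per-particle update counts."""
        max_level = max(levels)
        n_sub = 2 ** max_level
        dt_min = dt_max / n_sub
        updates = [0] * len(positions)
        for s in range(n_sub):
            for i, lev in enumerate(levels):
                stride = 2 ** (max_level - lev)   # sub-steps per update
                if s % stride == 0:
                    # trivial drift update, standing in for kick-drift-kick
                    positions[i] += velocities[i] * dt_min * stride
                    updates[i] += 1
        return updates
    ```

    The speedup the abstract reports comes from exactly this effect: most particles sit on coarse levels and are integrated a small fraction of the time, while accuracy is kept for the few particles needing short steps.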

  14. Free Energy Gap and Statistical Thermodynamic Fidelity of DNA Codes

    DTIC Science & Technology

    2007-10-01

    For a strand x, let Nx denote its reverse-complement. A (perfect) Watson-Crick duplex is the joining of a strand with its complement; since it is possible for complementary sequences to form a non-perfectly aligned duplex, any x-Nx duplex is called a Watson-Crick (WC) duplex.

  15. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
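    The analysis-by-synthesis step can be sketched in miniature: each candidate excitation in a codebook is passed through the synthesis filter, and the candidate minimizing squared error against the target block wins. An AR(1) filter and a three-entry codebook are used below purely for illustration; the MFCELP coder uses far larger codebooks and forward-adapted AR models.

    ```python
    # Minimal analysis-by-synthesis sketch: every codebook excitation is
    # synthesized through the AR filter and compared to the target block;
    # the index with the smallest squared error is selected. The AR(1)
    # coefficient and tiny codebook are illustrative stand-ins.

    def synthesize(excitation, a):
        """Run an excitation through an AR(1) synthesis filter."""
        out, prev = [], 0.0
        for e in excitation:
            prev = e + a * prev
            out.append(prev)
        return out

    def best_excitation(target, codebook, a):
        """Index of the codebook entry whose synthesis best fits target."""
        def err(exc):
            return sum((t - s) ** 2
                       for t, s in zip(target, synthesize(exc, a)))
        return min(range(len(codebook)), key=lambda i: err(codebook[i]))
    ```

    Searching in the synthesized domain, rather than matching the excitation directly, is what lets the coder account for the filter's memory when picking each microblock's excitation.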

  16. Computational/Experimental Aeroheating Predictions for X-33. Phase 2; Vehicle

    NASA Technical Reports Server (NTRS)

    Hamilton, H. Harris, II; Weilmuenster, K. James; Horvath, Thomas J.; Berry, Scott A.

    1998-01-01

    Laminar and turbulent heating-rate calculations from an "engineering" code and laminar calculations from a "benchmark" Navier-Stokes code are compared with experimental wind-tunnel data obtained on several candidate configurations for the X-33 Phase 2 flight vehicle. The experimental data were obtained at a Mach number of 6 and a freestream Reynolds number ranging from 1 to 8 x 10(exp 6)/ft. Comparisons are presented along the windward symmetry plane and in a circumferential direction around the body at several axial stations at angles of attack from 20 to 40 deg. The experimental results include both laminar and turbulent flow. For the highest angle of attack some of the measured heating data exhibited a "non-laminar" behavior which caused the heating to increase above the laminar level long before "classical" transition to turbulent flow was observed. This trend was not observed at the lower angles of attack. When the flow was laminar, both codes predicted the heating along the windward symmetry plane reasonably well but under-predicted the heating in the chine region. When the flow was turbulent the LATCH code accurately predicted the measured heating rates. Both codes were used to calculate heating rates over the X-33 vehicle at the peak heating point on the design trajectory and they were found to be in very good agreement over most of the vehicle windward surface.

  17. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants

    PubMed Central

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491

  18. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants.

    PubMed

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders-attention to detail and systemizing-may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.

  19. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
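    The orthogonality of Hc and Hd mentioned above is the consistency condition of the CSS construction: correcting X errors with one matrix and Z errors with the other requires every row of Hc to be orthogonal to every row of Hd over GF(2). A checker is a one-liner; the example matrices below are a tiny illustrative pair (the [7,4] Hamming check matrix, which is self-orthogonal and underlies the Steane code), not Hagiwara's quasi-cyclic construction.

    ```python
    # CSS consistency sketch: X errors corrected with Hc and Z errors
    # with Hd is only well-defined when Hc * Hd^T = 0 over GF(2).

    def css_orthogonal(hc, hd):
        """Check Hc * Hd^T == 0 over GF(2) row by row."""
        return all(sum(a * b for a, b in zip(r1, r2)) % 2 == 0
                   for r1 in hc for r2 in hd)
    ```

    Deleting rows from Hc or Hd, as the abstract describes for rate adjustment, preserves this condition, which is why the same matrix pair can serve a whole family of code rates.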

  20. Analysis of a Distributed Pulse Power System Using a Circuit Analysis Code

    DTIC Science & Technology

    1979-06-01

    A sophisticated computer code (SCEPTRE), used to analyze electronic circuits, was used to evaluate the performance of a large flash X-ray machine. The computed dose rate was integrated to give a number that could be compared with measurements made using thermoluminescent dosimeters (TLDs).

  1. The X chromosome in space.

    PubMed

    Jégu, Teddy; Aeby, Eric; Lee, Jeannie T

    2017-06-01

    Extensive 3D folding is required to package a genome into the tiny nuclear space, and this packaging must be compatible with proper gene expression. Thus, in the well-hierarchized nucleus, chromosomes occupy discrete territories and adopt specific 3D organizational structures that facilitate interactions between regulatory elements for gene expression. The mammalian X chromosome exemplifies this structure-function relationship. Recent studies have shown that, upon X-chromosome inactivation, active and inactive X chromosomes localize to different subnuclear positions and adopt distinct chromosomal architectures that reflect their activity states. Here, we review the roles of long non-coding RNAs, chromosomal organizational structures and the subnuclear localization of chromosomes as they relate to X-linked gene expression.

  2. Total x-ray power measurements in the Sandia LIGA program.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Michael E.; Ting, Aili

    2005-08-01

    Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on the LIGA beamline 3.3.1 of the ALS to provide experimental checks of predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version of the code in use at the time calorimetry was done. These experiments showed that it was necessary to use bend magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. These local magnetic field values and accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at the APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines.
In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of the use of heat flux sensors, photodiodes and modified beam calorimeters as total X-ray power

  3. Optimizing fusion PIC code performance at scale on Cori Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, T. S.; Deslippe, J.

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
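    The roofline methodology mentioned above reduces to a simple bound: a kernel with arithmetic intensity AI (flops per byte of memory traffic) can attain at most min(peak, AI x bandwidth). The peak and bandwidth figures in the usage below are illustrative round numbers, not measured Knights Landing values.

    ```python
    # Roofline model sketch: attainable performance is capped by either
    # the compute peak or by memory bandwidth times arithmetic intensity,
    # whichever is lower. The crossover ("ridge point") separates
    # memory-bound from compute-bound kernels.

    def roofline(ai, peak_gflops, bw_gbs):
        """Attainable GFLOP/s for a kernel of arithmetic intensity ai."""
        return min(peak_gflops, ai * bw_gbs)

    def bound(ai, peak_gflops, bw_gbs):
        """Classify a kernel relative to the ridge point."""
        ridge = peak_gflops / bw_gbs
        return "memory-bound" if ai < ridge else "compute-bound"
    ```

    Plotting each kernel's measured performance against this bound shows at a glance whether vectorization (raising the compute ceiling) or memory layout work (raising effective intensity) is the profitable optimization, which is how the XGC1 effort was guided.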

  4. Code comparison for accelerator design and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-01-01

    We present a comparison between results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings, with programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACE-TRACK. In our analysis we have considered 5 lattices of various sizes with large and small angles, including the AGS Booster (10 degree bend), RHIC (2.24 degree), SXLS, XLS (XUV ring with 45 degree bend) and X-RAY rings. The differences in the integration methods used and the treatment of the fringe fields in these codes could lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.

  5. A fully decompressed synthetic bacteriophage øX174 genome assembled and archived in yeast.

    PubMed

    Jaschke, Paul R; Lieberman, Erica K; Rodriguez, Jon; Sierra, Adrian; Endy, Drew

    2012-12-20

    The 5386 nucleotide bacteriophage øX174 genome has a complicated architecture that encodes 11 gene products via overlapping protein coding sequences spanning multiple reading frames. We designed a 6302 nucleotide synthetic surrogate, øX174.1, that fully separates all primary phage protein coding sequences along with cognate translation control elements. To specify øX174.1f, a decompressed genome the same length as wild type, we truncated the gene F coding sequence. We synthesized DNA encoding fragments of øX174.1f and used a combination of in vitro- and yeast-based assembly to produce yeast vectors encoding natural or designer bacteriophage genomes. We isolated clonal preparations of yeast plasmid DNA and transfected E. coli C strains. We recovered viable øX174 particles containing the øX174.1f genome from E. coli C strains that independently express full-length gene F. We expect that yeast can serve as a genomic 'drydock' within which to maintain and manipulate clonal lineages of other obligate lytic phage. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. X-ray Spectral Formation In High-mass X-ray Binaries: The Case Of Vela X-1

    NASA Astrophysics Data System (ADS)

    Akiyama, Shizuka; Mauche, C. W.; Liedahl, D. A.; Plewa, T.

    2007-05-01

    We are working to develop improved models of radiatively-driven mass flows in the presence of an X-ray source -- such as in X-ray binaries, cataclysmic variables, and active galactic nuclei -- in order to infer the physical properties that determine the X-ray spectra of such systems. The models integrate a three-dimensional time-dependent hydrodynamics capability (FLASH); a comprehensive and uniform set of atomic data, improved calculations of the line force multiplier that account for X-ray photoionization and non-LTE population kinetics, and X-ray emission-line models appropriate to X-ray photoionized plasmas (HULLAC); and a Monte Carlo radiation transport code that simulates Compton scattering and recombination cascades following photoionization. As a test bed, we have simulated a high-mass X-ray binary with parameters appropriate to Vela X-1. While the orbital and stellar parameters of this system are well constrained, the physics of X-ray spectral formation is less well understood because the canonical analytical wind velocity profile of OB stars does not account for the dynamical and radiative feedback effects due to the rotation of the system and to the irradiation of the stellar wind by X-rays from the neutron star. We discuss the dynamical wind structure of Vela X-1 as determined by the FLASH simulation, where in the binary the X-ray emission features originate, and how the spatial and spectral properties of the X-ray emission features are modified by Compton scattering, photoabsorption, and fluorescent emission. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  7. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  8. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  9. Development of a new EMP code at LANL

    NASA Astrophysics Data System (ADS)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes: Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two-dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower-energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution of the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three-body attachment, two-body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes the effects of photo-absorption, Compton scattering, and pair production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel, and we currently also assume complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed, as well as the way forward towards an integrated modern EMP code.

  10. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
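    The terminated-convolutional-code viewpoint can be illustrated with a minimal sketch (not the paper's iterative squaring construction; the standard (7,5) generator pair, constraint length K=3, and the tiny block length are arbitrary choices for illustration): padding the input with K-1 tail zeros flushes the trellis back to the all-zero state, so the encoder defines a fixed-length linear block code.

    ```python
    def conv_encode_terminated(bits, g1=0b111, g2=0b101, K=3):
        """Rate-1/2 feedforward convolutional encoder, trellis terminated with tail zeros."""
        state = 0
        out = []
        for b in bits + [0] * (K - 1):       # tail zeros force the trellis back to state 0
            state = ((state << 1) | b) & ((1 << K) - 1)
            out.append(bin(state & g1).count("1") % 2)   # first parity stream
            out.append(bin(state & g2).count("1") % 2)   # second parity stream
        return out

    # a 3-bit message becomes a 10-bit block: 2 output bits per input bit plus tail
    block = conv_encode_terminated([1, 0, 1])
    ```

    Because every encoding ends in the zero state, the set of output blocks is closed under addition, which is what lets the coset-code machinery of the paper be read off a convolutional generator matrix.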

  11. Design Considerations of a Virtual Laboratory for Advanced X-ray Sources

    NASA Astrophysics Data System (ADS)

    Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.

    2004-11-01

    The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.

  12. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time-consuming because of the iterative nature of its solution. Massively Parallel Processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.
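    The balanced correlation method can be sketched in one dimension (a hedged illustration; the length-7 m-sequence mask and the toy point-source scene are invented here, not taken from the experiment): each source point casts a shifted copy of the mask onto the detector, and correlating the detector pattern with a +1/-1 decoding array recovers the scene, because for such masks the mask-decoder cross-correlation is a delta function.

    ```python
    import numpy as np

    # length-7 m-sequence used as a 1-D cyclic coded-aperture mask (illustrative)
    mask = np.array([1, 1, 1, 0, 1, 0, 0])
    G = 2 * mask - 1            # balanced decoding array: +1 where open, -1 where closed
    n = len(mask)

    source = np.array([0.0, 5.0, 0.0, 2.0, 0.0, 0.0, 1.0])   # toy scene

    # detector pattern: each source point j casts the mask circularly shifted by j
    detector = np.array([sum(source[j] * mask[(k - j) % n] for j in range(n))
                         for k in range(n)])

    # balanced correlation: correlate detector with G; for this mask the
    # mask-G cross-correlation is 4*delta, so dividing by 4 recovers the scene
    recon = np.array([sum(detector[k] * G[(k - l) % n] for k in range(n))
                      for l in range(n)]) / 4.0
    ```

    The normalizing factor 4 is specific to this mask (its number of open elements); the URA designs used in the experiment have the same flat-sidelobe property in two dimensions.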

  13. Code-division-multiplexed readout of large arrays of TES microcalorimeters

    NASA Astrophysics Data System (ADS)

    Morgan, K. M.; Alpert, B. K.; Bennett, D. A.; Denison, E. V.; Doriese, W. B.; Fowler, J. W.; Gard, J. D.; Hilton, G. C.; Irwin, K. D.; Joe, Y. I.; O'Neil, G. C.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Swetz, D. S.

    2016-09-01

    Code-division multiplexing (CDM) offers a path to reading out large arrays of transition edge sensor (TES) X-ray microcalorimeters with excellent energy and timing resolution. We demonstrate the readout of X-ray TESs with a 32-channel flux-summed code-division multiplexing circuit based on superconducting quantum interference device (SQUID) amplifiers. The best detector has energy resolution of 2.28 ± 0.12 eV FWHM at 5.9 keV and the array has mean energy resolution of 2.77 ± 0.02 eV over 30 working sensors. The readout channels are sampled sequentially at 160 ns/row, for an effective sampling rate of 5.12 μs/channel. The SQUID amplifiers have a measured flux noise of 0.17 μΦ0/√Hz (non-multiplexed, referred to the first stage SQUID). The multiplexed noise level and signal slew rate are sufficient to allow readout of more than 40 pixels per column, making CDM compatible with requirements outlined for future space missions. Additionally, because the modulated data from the 32 SQUID readout channels provide information on each X-ray event at the row rate, our CDM architecture allows determination of the arrival time of an X-ray event to within 275 ns FWHM with potential benefits in experiments that require detection of near-coincident events.
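    The modulation arithmetic behind flux-summed CDM can be sketched with Walsh codes (an illustrative toy, not the SQUID hardware implementation; the 8-channel size and noiseless stand-in signals are assumptions): each row-time slot measures one ±1-weighted sum of all detectors, and because the Walsh-Hadamard matrix is orthogonal, applying its transpose demodulates the per-detector signals exactly.

    ```python
    import numpy as np

    def walsh(n):
        """Sylvester-type Walsh-Hadamard matrix of +/-1 entries; n must be a power of two."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    n_det, n_samp = 8, 64
    H = walsh(n_det)

    rng = np.random.default_rng(0)
    signals = rng.normal(size=(n_det, n_samp))   # stand-in detector waveforms

    # each of the n_det row slots records one Walsh-weighted sum of all detectors
    measured = H @ signals

    # demodulation: H @ H.T = n_det * I, so the transpose undoes the modulation
    recovered = (H.T @ measured) / n_det
    ```

    Because every measurement contains every detector (unlike time-division multiplexing), each X-ray event leaves a signature at the row rate, which is what enables the sub-microsecond arrival-time determination described above.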

  14. Four-Dimensional Continuum Gyrokinetic Code: Neoclassical Simulation of Fusion Edge Plasmas

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2005-10-01

    We are developing a continuum gyrokinetic code, TEMPEST, to simulate edge plasmas. Our code represents velocity space via a grid in equilibrium energy and magnetic moment variables, and configuration space via poloidal magnetic flux and poloidal angle. The geometry is that of a fully diverted tokamak (single or double null) and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The 4-dimensional code includes kinetic electrons and ions, and electrostatic field-solver options, and simulates neoclassical transport. The present implementation is a Method of Lines approach where spatial finite-differences (higher order upwinding) and implicit time advancement are used. We present results of initial verification and validation studies: transition from collisional to collisionless limits of parallel end-loss in the scrape-off layer, self-consistent electric field, and the effect of the real X-point geometry and edge plasma conditions on the standard neoclassical theory, including a comparison of our 4D code with other kinetic neoclassical codes and experiments.

  15. The Sensitivity of Coded Mask Telescopes

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald K.

    2008-01-01

    Simple formulae are often used to estimate the sensitivity of coded mask X-ray or gamma-ray telescopes, but these are strictly only applicable if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available a procedure is given which allows the calculation of the sensitivity. We consider certain aspects of the optimisation of the design of a coded mask telescope and show that when the detector spatial resolution and the mask to detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.

  16. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction but also the representation of relationships between stimuli.
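    The error-correction notion examined here, decoding a noisy codeword to the nearest codeword in Hamming distance, can be sketched on a toy binary code (the 4-codeword length-5 codebook below is invented for illustration; actual RF and comparison codes are far larger):

    ```python
    def hamming(a, b):
        """Number of positions where two equal-length binary tuples differ."""
        return sum(x != y for x, y in zip(a, b))

    def decode(word, codebook):
        """Maximum-likelihood decoding for a symmetric channel: nearest codeword."""
        return min(codebook, key=lambda c: hamming(word, c))

    # toy code with minimum distance 3, so any single bit flip is corrected
    codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
    received = (1, 1, 1, 1, 0)          # (1, 1, 1, 0, 0) with one flipped bit
    corrected = decode(received, codebook)
    ```

    A code's minimum pairwise distance fixes how many errors nearest-codeword decoding can fix; the paper's finding is that RF codes spend their redundancy on stimulus geometry rather than on maximizing that distance.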

  17. Secure and Practical Defense Against Code-Injection Attacks using Software Dynamic Translation

    DTIC Science & Technology

    2006-06-16

    [Only figure text was extracted for this record: it depicts application code translated into a fragment cache, with code fragments ending in trampolines that context-switch back to the translator's fetch/decode loop.]

  18. Improvements in the MGA Code Provide Flexibility and Better Error Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhter, W D; Kerr, J

    2005-05-26

    The Multi-Group Analysis (MGA) code is widely used to determine nondestructively the relative isotopic abundances of plutonium by gamma-ray spectrometry. MGA users have expressed concern about the lack of flexibility and transparency in the code. Users often have to ask the code developers for modifications to the code to accommodate new measurement situations, such as additional peaks being present in the plutonium spectrum or expected peaks being absent. We are testing several new improvements to a prototype, general gamma-ray isotopic analysis tool with the intent of either revising or replacing the MGA code. These improvements will give the user the ability to modify, add, or delete the gamma- and x-ray energies and branching intensities used by the code in determining a more precise gain and in the determination of the relative detection efficiency. We have also fully integrated the determination of the relative isotopic abundances with the determination of the relative detection efficiency to provide a more accurate determination of the errors in the relative isotopic abundances. We provide details in this paper on these improvements and a comparison of results obtained with current versions of the MGA code.

  19. Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldwin, G.T.; Craven, R.E.

    X-ray energy spectra of the Maxwell Laboratories MBS and Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate the channel response functions, and the UFO code to unfold the spectrum.

  20. Delayed photo-emission model for beam optics codes

    DOE PAGES

    Jensen, Kevin L.; Petillo, John J.; Panagos, Dimitrios N.; ...

    2016-11-22

    Future advanced light sources and x-ray Free Electron Lasers require fast response from the photocathode to enable short electron pulse durations as well as pulse shaping, and so the ability to model delays in emission is needed for beam optics codes. The development of a time-dependent emission model accounting for delayed photoemission due to transport and scattering is given, and its inclusion in the Particle-in-Cell code MICHELLE results in changes to the pulse shape that are described. Furthermore, the model is applied to pulse elongation of a bunch traversing an rf injector, and to the smoothing of laser jitter on a short pulse.

  1. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  2. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  3. Free Energy Gap and Statistical Thermodynamic Fidelity of DNA Codes (Postprint)

    DTIC Science & Technology

    2007-01-01

    reverse-complement unless otherwise stated. For strand x, let Nx denote its complement. A (perfect) Watson-Crick duplex is the joining of complement... is possible for complementary sequences to form a non-perfectly aligned duplex, we will call any x W Nx duplex a Watson-Crick (WC) duplex. Two...

  4. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
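    The defining property of the LDPC codes discussed here is that a valid codeword satisfies every row of a sparse parity-check matrix, H·c = 0 (mod 2), and any bit error produces a nonzero syndrome. A toy sketch (the 3×6 matrix below is invented and far smaller and denser than any code the workgroup would recommend):

    ```python
    import numpy as np

    # toy parity-check matrix: each row is one parity constraint on the 6 code bits
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    codeword = np.array([1, 1, 0, 0, 1, 1])    # satisfies all three checks

    corrupted = codeword.copy()
    corrupted[2] ^= 1                          # flip one bit
    syndrome = H @ corrupted % 2               # nonzero syndrome flags the error
    ```

    The syndrome pattern equals the column of H for the flipped bit, which is the starting point for the iterative message-passing decoders whose scaling sensitivity the charts examine.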

  5. Large-scale transmission-type multifunctional anisotropic coding metasurfaces in millimeter-wave frequencies

    NASA Astrophysics Data System (ADS)

    Cui, Tie Jun; Wu, Rui Yuan; Wu, Wei; Shi, Chuan Bo; Li, Yun Bo

    2017-10-01

    We propose fast and accurate designs for large-scale and low-profile transmission-type anisotropic coding metasurfaces with multiple functions in the millimeter-wave frequencies, based on the antenna-array method. The numerical simulation of an anisotropic coding metasurface with a size of 30λ × 30λ by the proposed method takes only 20 min, which cannot be matched by commercial software due to huge memory usage on personal computers. To inspect the performance of coding metasurfaces in the millimeter-wave band, the working frequency is chosen as 60 GHz. Based on convolution operations and holographic theory, the proposed multifunctional anisotropic coding metasurface exhibits different effects when excited by y-polarized and x-polarized incidences. This study extends the frequency range of coding metasurfaces, filling the gap between the microwave and terahertz bands, and implying promising applications in millimeter-wave communication and imaging.

  6. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; the visual layers method contained colours developed from psychological and cartographic principles which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly by presentation order and for the method × order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  7. X-Ray, EUV, UV and Optical Emissivities of Astrophysical Plasmas

    NASA Technical Reports Server (NTRS)

    Raymond, John C.; West, Donald (Technical Monitor)

    2000-01-01

    This grant primarily covered the development of the thermal X-ray emission model code called APEC, which is meant to replace the Raymond and Smith (1977) code. The new code contains far more spectral lines and a great deal of updated atomic data. The code is now available (http://hea-www.harvard.edu/APEC), though new atomic data is still being added, particularly at longer wavelengths. While initial development of the code was funded by this grant, current work is carried on by N. Brickhouse, R. Smith and D. Liedahl under separate funding. Over the last five years, the grant has provided salary support for N. Brickhouse, R. Smith, a summer student (L. McAllister), an SAO predoctoral fellow (A. Vasquez), and visits by T. Kallman, D. Liedahl, P. Ghavamian, J.M. Laming, J. Li, P. Okeke, and M. Martos. In addition to the code development, the grant supported investigations into X-ray and UV spectral diagnostics as applied to shock waves in the ISM, accreting black holes and white dwarfs, and stellar coronae. Many of these efforts are continuing. Closely related work on the shock waves and coronal mass ejections in the solar corona has grown out of the efforts supported by the grant.

  8. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL has on average added 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  9. A Simple X-Y Scanner.

    ERIC Educational Resources Information Center

    Halse, M. R.; Hudson, W. J.

    1986-01-01

    Describes an X-Y scanner used to create acoustic holograms. Scanner is computer controlled and can be adapted to digitize pictures. Scanner geometry is discussed. An appendix gives equipment details. The control program in ATOM BASIC and 6502 machine code is available from the authors. (JM)

  10. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols

    DTIC Science & Technology

    2010-08-01

    arXiv:1008.3196v1 [cs.IT] 19 Aug 2010. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols. Don... direct-sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding... amplitude, phase, and the interference power spectral density (PSD) due to the combined interference and thermal noise is proposed for DS-CDMA systems

  11. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  12. Status of BOUT fluid turbulence code: improvements and verification

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Lodestro, L. L.; Xu, X. Q.

    2006-10-01

    BOUT is an electromagnetic fluid turbulence code for tokamak edge plasma [1]. BOUT performs time integration of reduced Braginskii plasma fluid equations, using spatial discretization in realistic geometry and employing a standard ODE integration package PVODE. BOUT has been applied to several tokamak experiments and in some cases calculated spectra of turbulent fluctuations compared favorably to experimental data. On the other hand, the desire to understand better the code results and to gain more confidence in it motivated investing effort in rigorous verification of BOUT. Parallel to the testing the code underwent substantial modification, mainly to improve its readability and tractability of physical terms, with some algorithmic improvements as well. In the verification process, a series of linear and nonlinear test problems was applied to BOUT, targeting different subgroups of physical terms. The tests include reproducing basic electrostatic and electromagnetic plasma modes in simplified geometry, axisymmetric benchmarks against the 2D edge code UEDGE in real divertor geometry, and neutral fluid benchmarks against the hydrodynamic code LCPFCT. After completion of the testing, the new version of the code is being applied to actual tokamak edge turbulence problems, and the results will be presented. [1] X. Q. Xu et al., Contr. Plas. Phys., 36,158 (1998). *Work performed for USDOE by Univ. Calif. LLNL under contract W-7405-ENG-48.

  13. A Dual Origin of the Xist Gene from a Protein-Coding Gene and a Set of Transposable Elements

    PubMed Central

    Elisaphenko, Eugeny A.; Kolesnikov, Nikolay N.; Shevchenko, Alexander I.; Rogozin, Igor B.; Nesterova, Tatyana B.; Brockdorff, Neil; Zakian, Suren M.

    2008-01-01

    X-chromosome inactivation, which occurs in female eutherian mammals, is controlled by a complex X-linked locus termed the X-inactivation center (XIC). Previously it was proposed that genes of the XIC evolved, at least in part, as a result of pseudogenization of protein-coding genes. In this study we show that the key XIC gene Xist, which displays fragmentary homology to a protein-coding gene Lnx3, emerged de novo in early eutherians by integration of mobile elements which gave rise to simple tandem repeats. The Xist gene promoter region and four out of ten exons found in eutherians retain homology to exons of the Lnx3 gene. The remaining six Xist exons, including those with simple tandem repeats detectable in their structure, have similarity to different transposable elements. Integration of mobile elements into Xist accompanies the overall evolution of the gene and presumably continues in contemporary eutherian species. Additionally, we show that the combination of remnants of protein-coding sequences and mobile elements is not unique to the Xist gene and is found in other XIC genes producing non-coding nuclear RNA. PMID:18575625

  14. A Direct TeX-to-Braille Transcribing Method

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Tsolomitis, Antonis

    2017-01-01

    The TeX/LaTeX typesetting system is the most wide-spread system for creating documents in Mathematics and Science. However, no reliable tool exists to this day for automatically transcribing documents from the above formats into Braille/Nemeth code. Thus, visually impaired students of related fields do not have access to the bulk of study material…

  15. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)

    1989-01-01

    Two pipelined (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps, prior to adding the received RS coded message to produce a decoded output message.
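    The syndrome computation S(x) that feeds both decoders can be sketched at toy scale (a hedged illustration, not the patent's pipeline: GF(2⁸) with the common 0x11d primitive polynomial, a two-root generator, and a 4-symbol codeword are all assumed here): syndromes are the received polynomial evaluated at successive powers of α, and they vanish exactly when the received block is a codeword.

    ```python
    PRIM = 0x11d  # x^8 + x^4 + x^3 + x^2 + 1, a standard GF(2^8) modulus (assumed)

    def gf_mul(a, b):
        """Carry-less multiplication modulo PRIM over GF(2^8)."""
        p = 0
        while b:
            if b & 1:
                p ^= a
            a <<= 1
            if a & 0x100:
                a ^= PRIM
            b >>= 1
        return p

    def poly_eval(coeffs, x):
        """Evaluate a polynomial (highest-degree coefficient first) by Horner's rule."""
        y = 0
        for c in coeffs:
            y = gf_mul(y, x) ^ c
        return y

    def syndromes(received, nsyn, alpha=2):
        """S_i = r(alpha^i) for i = 1..nsyn; all zero iff no detectable error."""
        s, a = [], 1
        for _ in range(nsyn):
            a = gf_mul(a, alpha)
            s.append(poly_eval(received, a))
        return s

    # codeword = (x + 1) * g(x), with g(x) = (x + 2)(x + 4) = x^2 + 6x + 8 over GF(2^8)
    codeword = [1, 7, 14, 8]
    ```

    In the full decoder these syndromes seed the GCD stage that produces τ(x) and A(x); here they simply flag whether the block needs correction.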

  16. X-inactivation and X-reactivation: epigenetic hallmarks of mammalian reproduction and pluripotent stem cells.

    PubMed

    Payer, Bernhard; Lee, Jeannie T; Namekawa, Satoshi H

    2011-08-01

    X-chromosome inactivation is an epigenetic hallmark of mammalian development. Chromosome-wide regulation of the X-chromosome is essential in embryonic and germ cell development. In the male germline, the X-chromosome goes through meiotic sex chromosome inactivation, and the chromosome-wide silencing is maintained from meiosis into spermatids before the transmission to female embryos. In early female mouse embryos, X-inactivation is imprinted to occur on the paternal X-chromosome, representing the epigenetic programs acquired in both parental germlines. Recent advances revealed that the inactive X-chromosome in both females and males can be dissected into two elements: repeat elements versus unique coding genes. The inactive paternal X in female preimplantation embryos is reactivated in the inner cell mass of blastocysts in order to subsequently allow the random form of X-inactivation in the female embryo, by which both Xs have an equal chance of being inactivated. X-chromosome reactivation is regulated by pluripotency factors and also occurs in early female germ cells and in pluripotent stem cells, where X-reactivation is a stringent marker of naive ground state pluripotency. Here we summarize recent progress in the study of X-inactivation and X-reactivation during mammalian reproduction and development as well as in pluripotent stem cells.

  17. Imaging Performance Analysis of Simbol-X with Simulations

    NASA Astrophysics Data System (ADS)

    Chauvin, M.; Roques, J. P.

    2009-05-01

    Simbol-X is an X-ray telescope operating in formation flight, so its optical performance will depend strongly on the relative drift of the two spacecraft and on the ability to measure these drifts for image reconstruction. We built a dynamical ray-tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using this simulation tool, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  18. Performance of MIMO-OFDM using convolution codes with QAM modulation

    NASA Astrophysics Data System (ADS)

    Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa

    2014-04-01

    The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correction code) to detect and correct errors that occur during data transmission; one option is a convolution code. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique with QAM modulation and a code rate of 1/2. The evaluation is done by analyzing Bit Error Rate (BER) versus energy per bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel. To achieve a BER of 10^-3, the SISO-OFDM scheme requires an SNR of 10 dB, and the 2×2 MIMO-OFDM scheme likewise requires 10 dB, while the 4×4 MIMO-OFDM scheme requires 5 dB; adding convolution coding to the 4×4 MIMO-OFDM scheme improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the uncoded 4×4 MIMO-OFDM system, of 7 dB relative to 2×2 MIMO-OFDM, and a significant saving relative to the SISO-OFDM system.
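A rate-1/2 convolution code of the kind the record adds to the OFDM chain can be sketched as a shift-register encoder producing two output bits per input bit; the constraint length and generator taps below are illustrative stand-ins, since the paper does not specify its generators.

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    """Rate-1/2 convolutional encoder: 2 coded bits per input bit.

    gens are generator taps for a constraint-length-3 code
    (hypothetical parameters, chosen only for illustration).
    """
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)     # shift the new bit in
        for g in gens:
            out.append(bin(state & g).count("1") % 2)   # parity of tapped bits
    return out

print(conv_encode([1, 0, 1]))   # -> [1, 1, 1, 0, 0, 0]
```

The doubled bit count is the price of the coding gain: the decoder exploits the redundancy to pull the BER curve several dB to the left, as the record's measurements show.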

  19. REgolith X-Ray Imaging Spectrometer (REXIS) Aboard NASA’s OSIRIS-REx Mission

    NASA Astrophysics Data System (ADS)

    Hong, JaeSub; Allen, Branden; Grindlay, Jonathan E.; Binzel, Richard P.; Masterson, Rebecca; Inamdar, Niraj K; Chodas, Mark; Smith, Matthew W; Bautz, Mark W.; Kissel, Steven E; Villasenor, Jesus Noel; Oprescu, Antonia

    2014-06-01

    The REgolith X-Ray Imaging Spectrometer (REXIS) is a student-led instrument being designed, built, and operated as a collaborative effort involving MIT and Harvard. It is a part of NASA's OSIRIS-REx mission, which is scheduled for launch in September 2016 for a rendezvous with, and collection of a sample from the surface of, the primitive carbonaceous chondrite-like asteroid 101955 Bennu in 2019. REXIS will determine spatial variations in the elemental composition of Bennu's surface through solar-induced X-ray fluorescence. REXIS consists of four X-ray CCDs in the detector plane and an X-ray mask. It is the first coded-aperture X-ray telescope on a planetary mission, combining the high X-ray throughput of wide-field collimation with the imaging capability of a coded mask, enabling detection of elemental surface distributions at approximately 50-200 m scales. We present an overview of the REXIS instrument and the expected performance.
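The coded-mask principle behind REXIS can be sketched in one dimension: a mask pattern with a sharp cyclic autocorrelation lets a cross-correlation recover source positions from the shadowed detector counts. The quadratic-residue (URA-style) pattern and single point source below are a toy model, not REXIS's actual mask.

```python
import numpy as np

p = 11
qr = {(i * i) % p for i in range(1, p)}                    # quadratic residues mod 11
mask = np.array([1 if i in qr else 0 for i in range(p)])   # 1 = open mask element
g = 2 * mask - 1                                           # balanced decoding array
sky = np.zeros(p)
sky[7] = 100.0                                             # one point source
# Each detector element k sees the sky through a cyclically shifted mask.
det = np.array([sum(sky[j] * mask[(j + k) % p] for j in range(p))
                for k in range(p)])
# Cross-correlating with the decoding array recovers the source position.
recon = np.array([sum(det[k] * g[(m + k) % p] for k in range(p))
                  for m in range(p)])
print(recon.argmax())   # -> 7
```

Because the quadratic residues form an (11, 5, 2) difference set, the reconstruction has a single peak at the true position and a flat negative floor elsewhere.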

  20. Role of non-coding RNAs in non-aging-related neurological disorders.

    PubMed

    Vieira, A S; Dogini, D B; Lopes-Cendes, I

    2018-06-11

    Protein coding sequences represent only 2% of the human genome. Recent advances have demonstrated that a significant portion of the genome is actively transcribed as non-coding RNA molecules. These non-coding RNAs are emerging as key players in the regulation of biological processes, acting as "fine-tuners" of gene expression. Neurological disorders are caused by a wide range of genetic mutations and epigenetic and environmental factors, and the exact pathophysiology of many of these conditions is still unknown. It is currently recognized that dysregulation of non-coding RNA expression is present in many neurological disorders and may be relevant to the mechanisms leading to disease. In addition, circulating non-coding RNAs are emerging as potential biomarkers with great potential impact on clinical practice. In this review, we discuss mainly the role of microRNAs and long non-coding RNAs in several neurological disorders, such as epilepsy, Huntington disease, fragile X-associated ataxia, spinocerebellar ataxias, amyotrophic lateral sclerosis (ALS), and pain. In addition, we give information about the conditions in which microRNAs have been shown to be potential biomarkers, such as epilepsy, pain, and ALS.

  1. The fast decoding of Reed-Solomon codes using number theoretic transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Welch, L. R.; Truong, T. K.

    1976-01-01

    It is shown that Reed-Solomon (RS) codes can be encoded and decoded by using a fast Fourier transform (FFT) algorithm over finite fields. The arithmetic utilized to perform these transforms requires only integer additions, circular shifts, and a minimum number of integer multiplications. The computing time of this transform encoder-decoder for RS codes is less than the time of the standard method for RS codes. More generally, the field GF(q) is also considered, where q is a prime of the form K·2^n + 1 and K and n are integers. GF(q) can be used to decode very long RS codes by an efficient FFT algorithm with an improvement in the number of symbols. It is shown that a radix-8 FFT algorithm over GF(q^2) can be utilized to encode and decode very long RS codes with a large number of symbols. For eight symbols in GF(q^2), this transform over GF(q^2) can be made simpler than any other known number-theoretic transform with a similar capability. Of special interest is the decoding of a 16-tuple RS code with four errors.
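A minimal number-theoretic transform over such a prime field can be sketched as follows, using p = 257 = 1·2^8 + 1 (a small prime of the K·2^n + 1 form) and a naive O(n^2) evaluation for clarity; a radix-2 or radix-8 FFT would factor the same sums.

```python
P = 257   # Fermat prime 2^8 + 1, of the K*2^n + 1 form

def ntt(a, root):
    """Length-n transform A[i] = sum_j a[j] * root^(i*j) mod P."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
            for i in range(n)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
w = 4                              # 4 = 2^2 has multiplicative order 8 mod 257
A = ntt(a, w)                      # forward transform
inv_w = pow(w, P - 2, P)           # inverse root via Fermat's little theorem
inv_n = pow(len(a), P - 2, P)      # 1/n mod P
back = [(v * inv_n) % P for v in ntt(A, inv_w)]
assert back == a                   # exact round trip: no rounding error at all
```

The exact arithmetic (no floating point) is precisely what makes these transforms attractive for RS encoding and decoding.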

  2. Coded-Aperture X- or gamma -ray telescope with Least- squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels the detector background and eliminates its associated uncertainties, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.
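The key idea above — fewer sky pixels than detector elements makes the linear system overdetermined, so least squares can invert it — can be sketched with a random binary mask matrix; the dimensions, mask, and source fluxes below are illustrative, not the instrument's actual response.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 16, 32                  # fewer sky pixels than detector elements
A = rng.integers(0, 2, size=(n_det, n_pix)).astype(float)  # open/closed response
f_true = np.zeros(n_pix)
f_true[[3, 9]] = [5.0, 2.0]            # two point sources (arbitrary fluxes)
y = A @ f_true                         # noiseless detector counts y = A f
f_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(f_hat, f_true, atol=1e-6)   # exact recovery without noise
```

With photon noise added to y, the same solve returns the minimum-variance linear estimate, which is why the limiter's pixel-count restriction matters: it keeps A tall and well-conditioned.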

  3. Human X-Linked genes regionally mapped utilizing X-autosome translocations and somatic cell hybrids.

    PubMed Central

    Shows, T B; Brown, J A

    1975-01-01

    Human genes coding for hypoxanthine phosphoribosyltransferase (HPRT, EC 2.4.2.8; IMP:pyrophosphate phosphoribosyltransferase), glucose-6-phosphate dehydrogenase (G6PD, EC 1.1.1.49; D-glucose-6-phosphate:NADP+ 1-oxidoreductase), and phosphoglycerate kinase (PGK, EC 2.7.2.3; ATP:3-phospho-D-glycerate 1-phosphotransferase) have been assigned to specific regions on the long arm of the X chromosome by somatic cell genetic techniques. Gene assignment and linear order were determined by employing human somatic cells possessing an X/9 translocation or an X/22 translocation in man-mouse cell hybridization studies. The X/9 translocation involved the majority of the X long arm translocated to chromosome 9, and the X/22 translocation involved the distal half of the X long arm translocated to 22. In each case these rearrangements appeared to be reciprocal. Concordant segregation of X-linked enzymes and segments of the X chromosome generated by the translocations indicated assignment of the PGK gene to a proximal long arm region (q12-q22) and the HPRT and G6PD genes to the distal half (q22-qter) of the X long arm. Further evidence suggests a gene order on the X long arm of centromere-PGK-HPRT-G6PD. PMID:1056018

  4. Position-based coding and convex splitting for private communication over quantum channels

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2017-10-01

    The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
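In symbols, the stated one-shot bound might be rendered as below; the precise notation (smoothing parameters ε₁, ε₂ and the state subscript ρ) is assumed here for illustration rather than taken from the paper:

```latex
P^{\varepsilon}_{\mathrm{priv}}(\mathcal{N}) \;\gtrsim\;
  I_{H}^{\varepsilon_{1}}(X;B)_{\rho} \;-\; \widetilde{I}_{\max}^{\varepsilon_{2}}(X;E)_{\rho}
```

That is, the achievable private rate is at least the hypothesis-testing mutual information with the legitimate receiver minus the alternate smooth max-information leaked to the eavesdropper.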

  5. Simulations of vertical disruptions with VDE code: Hiro and Evans currents

    NASA Astrophysics Data System (ADS)

    Li, Xujing; Di Hu Team; Leonid Zakharov Team; Galkin Team

    2014-10-01

    The recently created numerical code VDE for simulations of vertical instability in tokamaks is presented. The numerical scheme uses the Tokamak MHD model, where the plasma inertia is replaced by the friction force, and an adaptive grid numerical scheme. The code reproduces well the surface currents generated at the plasma boundary by the instability. Five regimes of the vertical instability are presented: (1) Vertical instability in a given plasma shaping field without a wall; (2) The same with a wall and magnetic flux ΔΨ_pl^X < ΔΨ_wall^X (where X corresponds to the X-point of a separatrix); (3) The same with a wall and magnetic flux ΔΨ_pl^X > ΔΨ_wall^X; (4) Vertical instability without a wall with a tile surface at the plasma path; (5) The same in the presence of a wall and a tile surface. The generation of negative Hiro currents along the tile surface, predicted earlier by the theory and measured on EAST in 2012, is well-reproduced by simulations. In addition, the instability generates the force-free Evans currents at the free plasma surface. A new pattern of reconnection of the plasma with the vacuum magnetic field is discovered. This work is supported by US DoE Contract No. DE-AC02-09-CH11466.

  6. Images multiplexing by code division technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung J.; Rigas, Harriett

    Spread Spectrum Systems (SSS) and Code Division Multiple Access Systems (CDMAS) have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with corresponding m-sequences to obtain the encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Secondly, the two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.

  7. Images Multiplexing By Code Division Technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung Jung; Rigas, Harriett B.

    1990-01-01

    Spread Spectrum Systems (SSS) and Code Division Multiple Access Systems (CDMAS) have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with corresponding m-sequences to obtain the encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the 2-D image is treated as a long 1-D vector and the m-sequence is employed to obtain the results. Secondly, the 2-D quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.
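The correlation-based recovery in both records rests on the two-valued periodic autocorrelation of m-sequences: n at zero shift, -1 at every other shift. The sketch below generates a length-7 m-sequence from a 3-bit LFSR and checks that property (a toy 1-D stand-in for the papers' 2-D quasi m-arrays).

```python
def lfsr_mseq(nbits=3):
    """One period of a maximal-length sequence from a 3-bit Fibonacci LFSR
    (taps chosen for the primitive polynomial x^3 + x + 1)."""
    state, seq = 1, []
    for _ in range((1 << nbits) - 1):
        seq.append(state & 1)
        fb = (state ^ (state >> 1)) & 1            # XOR of the two tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return seq

c = [1 if b else -1 for b in lfsr_mseq()]          # bipolar (+1/-1) chips
n = len(c)
auto = [sum(c[i] * c[(i + k) % n] for i in range(n)) for k in range(n)]
print(auto)   # -> [7, -1, -1, -1, -1, -1, -1]
```

The sharp peak at zero shift is what lets the correlator pick one image's contribution out of the superimposed sum while the other images average toward the low -1 floor.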

  8. X-ray modeling for SMILE

    NASA Astrophysics Data System (ADS)

    Sun, T.; Wang, C.; Wei, F.; Liu, Z. Q.; Zheng, J.; Yu, X. Z.; Sembay, S.; Branduardi-Raymont, G.

    2016-12-01

    SMILE (Solar wind Magnetosphere Ionosphere Link Explorer) is a novel mission to explore the coupling of the solar wind-magnetosphere-ionosphere system by providing global images of the magnetosphere and aurora. As X-ray imaging is a brand new technique for studying the large-scale magnetopause, modeling of the solar wind charge exchange (SWCX) X-ray emissions in the magnetosheath and cusps is vital in several respects: it helps the design of the Soft X-ray Imager (SXI) on SMILE and the selection of satellite orbits, as well as the analysis of expected scientific outcomes. Based on the PPMLR-MHD code, we present simulation results of the X-ray emissions in geospace during storm time. Both the polar orbit and the Molniya orbit are used. From the X-ray images of the magnetosheath and cusps, the magnetospheric responses to an interplanetary shock and an IMF southward turning are analyzed.

  9. Air kerma calibration factors and chamber correction values for PTW soft x-ray, NACP and Roos ionization chambers at very low x-ray energies.

    PubMed

    Ipe, N E; Rosser, K E; Moretti, C J; Manning, J W; Palmer, M J

    2001-08-01

    This paper evaluates the characteristics of ionization chambers for the measurement of absorbed dose to water using very low-energy x-rays. The values of the chamber correction factor, k(ch), used in the IPEMB 1996 code of practice are derived for the UK secondary standard ionization chambers (PTW type M23342 and PTW type M23344) and the Roos (PTW type 34001) and NACP electron chambers. The responses in air of the small and large soft x-ray chambers (PTW type M23342 and PTW type M23344) and the NACP and Roos electron ionization chambers were compared. Besides the soft x-ray chambers, the NACP and Roos chambers can be used for very low-energy x-ray dosimetry provided that they are used in the restricted energy range for which their response does not change by more than 5%. The chamber correction factor was found by comparing the absorbed dose to water determined using the dosimetry protocol recommended for low-energy x-rays with that for very low-energy x-rays. The overlap energy range was extended using data from Grosswendt and Knight. Chamber correction factors given in this paper are chamber dependent, varying from 1.037 to 1.066 for a PTW type M23344 chamber, which is very different from the value of unity given in the IPEMB code. However, the values of k(ch) determined in this paper agree with those given in the DIN standard within experimental uncertainty. The authors recommend that the very low-energy section of the IPEMB code be amended to include the most up-to-date values of k(ch).

  10. Kinetic Modeling of Ultraintense X-ray Laser-Matter Interactions

    NASA Astrophysics Data System (ADS)

    Royle, Ryan; Sentoku, Yasuhiko; Mancini, Roberto

    2016-10-01

    Hard x-ray free-electron lasers (XFELs) have had a profound impact on the physical, chemical, and biological sciences. They can produce millijoule x-ray laser pulses just tens of femtoseconds in duration with more than 10^12 photons each, making them the brightest laboratory x-ray sources ever produced by several orders of magnitude. An XFEL pulse can be intensified to 10^20 W/cm^2 when focused to submicron spot sizes, making it possible to isochorically heat solid matter well beyond 100 eV. These characteristics enable XFELs to create and probe well-characterized warm and hot dense plasmas of relevance to HED science, planetary science, laboratory astrophysics, relativistic laser plasmas, and fusion research. Several newly developed atomic physics models including photoionization, Auger ionization, and continuum-lowering have been implemented in a particle-in-cell code, PICLS, which self-consistently solves the x-ray transport, to enable the simulation of the non-LTE plasmas created by ultraintense x-ray laser interactions with solid density matter. The code is validated against the results of several recent experiments and is used to simulate the maximum-intensity x-ray heating of solid iron targets. This work was supported by DOE/OFES under Contract No. DE-SC0008827.

  11. Solving free-plasma-boundary problems with the SIESTA MHD code

    NASA Astrophysics Data System (ADS)

    Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.

    2017-10-01

    SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.

  12. MicroRNAs and intellectual disability (ID) in Down syndrome, X-linked ID, and Fragile X syndrome

    PubMed Central

    Siew, Wei-Hong; Tan, Kai-Leng; Babaei, Maryam Abbaspour; Cheah, Pike-See; Ling, King-Hwa

    2013-01-01

    Intellectual disability (ID) is one of the many features manifested in various genetic syndromes, leading to deficits in cognitive function among affected individuals. ID is a feature affected by polygenes and multiple environmental factors, and it leads to a broad spectrum of clinical and behavioral characteristics among patients. Until now, the causative mechanism of ID has been unknown and the progression of the condition is poorly understood. Advances in technology and research have identified various genetic abnormalities and defects as potential causes of ID. However, the link between these abnormalities and ID remains inconclusive, and the roles of many newly discovered genetic components, such as non-coding RNAs, have not been thoroughly investigated. In this review, we aim to consolidate and assimilate the latest developments and findings on the involvement of a class of small non-coding RNAs known as microRNAs (miRNAs) in ID development and progression, with special focus on Down syndrome (DS) and X-linked ID (XLID) [including Fragile X syndrome (FXS)]. PMID:23596395

  13. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low BER error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.
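The parity-check machinery these decoders parallelize can be shown at toy scale: a codeword satisfies H·c = 0 (mod 2), and a nonzero syndrome flags errors. The tiny [7,4] Hamming matrix below is only a stand-in for the much larger EG-based LDPC matrices in the paper.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],     # row r checks bits whose 1-based
              [0, 1, 1, 0, 0, 1, 1],     # index has binary bit r set
              [0, 0, 0, 1, 1, 1, 1]])
c = np.array([1, 1, 0, 0, 1, 1, 0])      # a valid codeword: all checks pass
assert not (H @ c % 2).any()

c_err = c.copy()
c_err[2] ^= 1                            # flip bit 3 (1-based)
syndrome = H @ c_err % 2
print(syndrome)                          # -> [1 1 0], binary for position 3
```

Each syndrome bit depends only on a few codeword bits, which is exactly the sparsity that lets LDPC check nodes be evaluated in parallel in hardware.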

  14. Parametric color coding of digital subtraction angiography.

    PubMed

    Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L

    2010-05-01

    Color has been shown to facilitate both visual search and recognition tasks. Our purpose was to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions, obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes, were selected from an internal database so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images. It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment

  15. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  16. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. A broad band X-ray imaging spectrophotometer for astrophysical studies

    NASA Technical Reports Server (NTRS)

    Lum, Kenneth S. K.; Lee, Dong Hwan; Ku, William H.-M.

    1988-01-01

    A broadband X-ray imaging spectrophotometer (BBXRIS) has been built for astrophysical studies. The BBXRIS is based on a large-imaging gas scintillation proportional counter (LIGSPC), a combination of a gas scintillation proportional counter and a multiwire proportional counter, which achieves 8 percent (FWHM) energy resolution and 1.5-mm (FWHM) spatial resolution at 5.9 keV. The LIGSPC can be integrated with a grazing incidence mirror and a coded aperture mask to provide imaging over a broad range of X-ray energies. The results of tests involving the LIGSPC and a coded aperture mask are presented, and possible applications of the BBXRIS are discussed.

  18. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a second-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  19. Rate-compatible punctured convolutional codes (RCPC codes) and their applications

    NASA Astrophysics Data System (ADS)

    Hagenauer, Joachim

    1988-04-01

    The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of the high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
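Puncturing and the rate-compatibility restriction can be sketched directly: a periodic keep/drop table selects which mother-code bits are transmitted, and a lower-rate table must keep a superset of a higher-rate table's bits. The tables below are illustrative, not Hagenauer's published ones.

```python
def puncture(coded_bits, pattern):
    """Transmit coded bit i only if pattern[i % len(pattern)] is 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# Period P = 4 with a rate-1/2 mother code -> 8 coded bits per period.
pat_45 = [1, 1, 0, 1, 0, 1, 0, 1]   # keeps 5 of 8 -> rate 4/5 (hypothetical)
pat_23 = [1, 1, 1, 1, 0, 1, 0, 1]   # keeps 6 of 8 -> rate 4/6 = 2/3

# Rate compatibility: every bit sent by the high-rate code is also sent by
# the lower-rate code, so the extra bits are pure incremental redundancy.
assert all(a <= b for a, b in zip(pat_45, pat_23))

print(puncture(list(range(8)), pat_45))   # -> [0, 1, 3, 5, 7]
```

In an ARQ/FEC scheme the transmitter first sends the rate-4/5 selection and, on a retransmission request, sends only the additional bit that pat_23 keeps and pat_45 drops.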

  20. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-06

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.

  1. Amino acid codes in mitochondria as possible clues to primitive codes

    NASA Technical Reports Server (NTRS)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
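The CUN reassignment described above can be written out as a pair of codon tables; only the CUN box is shown, taken directly from the abstract's statement (leucine in the universal code, threonine in yeast mitochondria).

```python
# CUN codons (N = any of U, C, A, G) in two genetic codes.
standard = {f"CU{n}": "Leu" for n in "UCAG"}      # universal code
yeast_mito = {f"CU{n}": "Thr" for n in "UCAG"}    # yeast mitochondrial code

assert standard["CUA"] == "Leu" and yeast_mito["CUA"] == "Thr"
```

A single tRNA mutation retargeting the whole four-codon box is consistent with the near-neutral protein changes the abstract infers.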

  2. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present

  3. Laminar Heating Validation of the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Dries, Kevin M.

    2005-01-01

    OVERFLOW, a structured finite-difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities on several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. Configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases demonstrates the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were done to give recommendations for the correct number of off-body points to be applied to generic problems and for wall-spacing values to capture heat transfer and skin friction. Numerical results show good comparison to the test data for all the configurations.

  4. Stabilization of the SIESTA MHD Equilibrium Code Using Rapid Cholesky Factorization

    NASA Astrophysics Data System (ADS)

    Hirshman, S. P.; D'Azevedo, E. A.; Seal, S. K.

    2016-10-01

    The SIESTA MHD equilibrium code solves the discretized nonlinear MHD force F ≡ J × B - ∇p for a 3D plasma which may contain islands and stochastic regions. At each nonlinear evolution step, it solves a set of linearized MHD equations which can be written r ≡ Ax - b = 0, where A is the linearized MHD Hessian matrix. When the solution norm |x| is small enough, the nonlinear force norm will be close to the linearized force norm |r| ≈ 0 obtained using preconditioned GMRES. In many cases, this procedure works well and leads to a vanishing nonlinear residual (equilibrium) after several iterations in SIESTA. In some cases, however, |x| > 1 results and the SIESTA code has to be restarted to obtain nonlinear convergence. In order to make SIESTA more robust and avoid such restarts, we have implemented a new rapid QR factorization of the Hessian which allows us to rapidly and accurately solve the least-squares problem A^T r = 0, subject to the condition |x| < 1. This avoids large contributions to the nonlinear force terms and in general makes the convergence sequence of SIESTA much more stable. The innovative rapid QR method is based on a pairwise row factorization of the tri-diagonal Hessian. It provides a complete Cholesky factorization while preserving the memory allocation of A. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
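
    The least-squares idea can be illustrated at toy scale. The sketch below is plain Python written for this summary, not SIESTA's method (its pairwise-row factorization of a tri-diagonal Hessian is far more specialized); it solves min |Ax - b| by a small Gram-Schmidt QR followed by back-substitution:

```python
# Toy least-squares solve via QR, illustrating the idea of replacing a
# direct solve of A x = b with minimizing |A x - b|. Illustrative only;
# not SIESTA's pairwise-row factorization.

def qr_gram_schmidt(A):
    """Classical Gram-Schmidt QR of an m x n matrix A (m >= n)."""
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def lstsq(A, b):
    """Solve min |A x - b| via QR: R x = Q^T b, by back-substitution."""
    Q, R = qr_gram_schmidt(A)
    n = len(R)
    y = [sum(Q[i][k] * b[i] for i in range(len(b))) for k in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (y[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# Overdetermined 3x2 system: best fit of y = c0 + c1*t to three points.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 3.0]
print(lstsq(A, b))  # the points lie on y = 1 + t, so the fit is exact
```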

  5. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
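
    The counted quantity can be checked by brute force at toy scale. The sketch below (plain Python, illustrative only; the paper works over general finite fields) takes the binary [3,2] single-parity-check code, whose 3 minimum-weight codewords have weight 2, forms its tensor product with itself, and confirms that the minimum weight and the number of minimum-weight codewords both multiply (d = 2·2 = 4 attained by 3·3 = 9 words):

```python
from itertools import product

# Brute-force check (toy scale): count minimum-weight codewords of a
# product (tensor product) code built from the [3,2] parity-check code.

G = [(1, 0, 1), (0, 1, 1)]  # generator matrix of the [3,2] parity-check code

def kron(A, B):
    """Kronecker product of two binary matrices (lists of row tuples)."""
    return [tuple(a * b for a in ra for b in rb) for ra in A for rb in B]

def codewords(G):
    """All codewords generated by G over GF(2)."""
    n = len(G[0])
    words = set()
    for msg in product((0, 1), repeat=len(G)):
        w = tuple(sum(m * g[i] for m, g in zip(msg, G)) % 2 for i in range(n))
        words.add(w)
    return words

GP = kron(G, G)                      # generator of the product code
weights = sorted(sum(w) for w in codewords(GP) if any(w))
dmin = weights[0]
count = weights.count(dmin)
print(dmin, count)  # 4 9: minimum weight 2*2, count 3*3
```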

  6. A soft X-ray source based on a low divergence, high repetition rate ultraviolet laser

    NASA Astrophysics Data System (ADS)

    Crawford, E. A.; Hoffman, A. L.; Milroy, R. D.; Quimby, D. C.; Albrecht, G. F.

    The CORK code is utilized to evaluate the applicability of low divergence ultraviolet lasers for efficient production of soft X-rays. The use of the axial hydrodynamic code with a one-zone radial expansion to estimate radial motion and laser energy is examined. The calculation of ionization levels of the plasma and radiation rates by employing the atomic physics and radiation model included in the CORK code is described. Computations using the hydrodynamic code to determine the effect of laser intensity, spot size, and wavelength on plasma electron temperature are provided. The X-ray conversion efficiencies of the lasers are analyzed. It is observed that for 1 GW laser power the X-ray conversion efficiency is a function of spot size and only weakly dependent on pulse length for time scales exceeding 100 psec, and that better conversion efficiencies are obtained at shorter wavelengths. It is concluded that these small lasers, focused to 30 micron spot sizes and 10^14 W/sq cm intensities, are useful sources of 1-2 keV radiation.

  7. Using computational modeling to compare X-ray tube Practical Peak Voltage for Dental Radiology

    NASA Astrophysics Data System (ADS)

    Holanda Cassiano, Deisemar; Arruda Correa, Samanda Cristine; de Souza, Edmilson Monteiro; da Silva, Ademir Xavier; Pereira Peixoto, José Guilherme; Tadeu Lopes, Ricardo

    2014-02-01

    The Practical Peak Voltage (PPV) has been adopted to measure the voltage applied to an X-ray tube. The PPV was recommended by an IEC document and was accepted and published in the TRS no. 457 code of practice. The PPV is defined for and applies to all waveforms, and is related to the spectral distribution of X-rays and to the properties of the image. The calibration of X-ray tubes was performed using the MCNPX Monte Carlo code. An X-ray tube for dental radiology (operated from a single-phase power supply) and an X-ray tube used as a reference (supplied from a constant-potential power supply) were used in simulations across the energy range of interest, 40 kV to 100 kV. The results obtained indicated a linear relationship between the tubes involved.

  8. Studies of auroral X-ray imaging from high altitude spacecraft

    NASA Technical Reports Server (NTRS)

    Mckenzie, D. L.; Mizera, P. F.; Rice, C. J.

    1980-01-01

    Results of a study of techniques for imaging the aurora from a high altitude satellite at X-ray wavelengths are summarized. The X-ray observations allow the straightforward derivation of the primary auroral X-ray spectrum and can be made at all local times, day and night. Five candidate imaging systems are identified: X-ray telescope, multiple pinhole camera, coded aperture, rastered collimator, and imaging collimator. Examples of each are specified, subject to common weight and size limits which allow them to be intercompared. The imaging ability of each system is tested using a wide variety of sample spectra which are based on previous satellite observations. The study shows that the pinhole camera and coded aperture are both good auroral imaging systems. The two collimated detectors are significantly less sensitive. The X-ray telescope provides better image quality than the other systems in almost all cases, but a limitation to energies below about 4 keV prevents this system from providing the spectral data essential to deriving electron spectra, energy input to the atmosphere, and atmospheric densities and conductivities. The orbit selection requires a tradeoff between spatial resolution and duty cycle.

  9. Reference manual for the POISSON/SUPERFISH Group of Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
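
    The discretization idea can be sketched in a few lines. The toy Jacobi solver below is illustrative only (the actual POISSON/SUPERFISH codes use their own mesh generation and solution machinery); it replaces the Laplacian with finite differences on a uniform mesh and relaxes toward the solution of the 2-D Laplace equation with fixed boundary values:

```python
# Finite-difference sketch: Laplace's equation on a unit square, one edge
# held at phi = 1, others at 0, solved by Jacobi iteration on a mesh.
# Illustrative only; not the POISSON/SUPERFISH algorithm.

N = 20                       # mesh points per side
phi = [[0.0] * N for _ in range(N)]
for i in range(N):
    phi[i][N - 1] = 1.0      # boundary condition on one edge

for _ in range(2000):        # Jacobi sweeps: each point -> mean of neighbors
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                                + phi[i][j + 1] + phi[i][j - 1])
    phi = new

# By symmetry the continuum solution at the center is 1/4; the discrete
# value lands close to that.
print(round(phi[N // 2][N // 2], 3))
```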

  10. X-linked hypophosphatemia attributable to pseudoexons of the PHEX gene.

    PubMed

    Christie, P T; Harding, B; Nesbit, M A; Whyte, M P; Thakker, R V

    2001-08-01

    X-linked hypophosphatemia is commonly caused by mutations of the coding region of PHEX (phosphate-regulating gene with homologies to endopeptidases on the X chromosome). However, such PHEX mutations are not detected in approximately one third of X-linked hypophosphatemia patients who may harbor defects in the noncoding or intronic regions. We have therefore investigated 11 unrelated X-linked hypophosphatemia patients in whom coding region mutations had been excluded, for intronic mutations that may lead to mRNA splicing abnormalities, by the use of lymphoblastoid RNA and RT-PCRs. One X-linked hypophosphatemia patient was found to have 3 abnormally large transcripts, resulting from 51-bp, 100-bp, and 170-bp insertions, all of which would lead to missense peptides and premature termination codons. The origin of these transcripts was a mutation (g to t) at position +1268 of intron 7, which resulted in the occurrence of a high quality novel donor splice site (ggaagg to gtaagg). Splicing between this novel donor splice site and 3 preexisting, but normally silent, acceptor splice sites within intron 7 resulted in the occurrences of the 3 pseudoexons. This represents the first report of PHEX pseudoexons and reveals further the diversity of genetic abnormalities causing X-linked hypophosphatemia.
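
    The effect of the mutation described above (a single g→t change turning the silent run ggaagg into the donor splice-site motif gtaagg) can be illustrated with a toy sequence scan. The sequence below is invented for illustration; it is not the actual PHEX intron 7 sequence:

```python
# Toy illustration: a g -> t substitution creates a novel donor splice
# motif (ggaagg -> gtaagg). The wild-type string is hypothetical.

DONOR = "gtaagg"

def donor_sites(seq):
    """Return 0-based positions where the donor motif occurs."""
    return [i for i in range(len(seq) - len(DONOR) + 1)
            if seq[i:i + len(DONOR)] == DONOR]

wild_type = "ccaggaaggtcca"          # contains ggaagg: no donor motif
pos = wild_type.find("ggaagg")
mutant = wild_type[:pos + 1] + "t" + wild_type[pos + 2:]  # g -> t

print(donor_sites(wild_type), donor_sites(mutant))  # [] [3]
```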

  11. DLA-X Total Quality Management (TQM) Implementation Plan

    DTIC Science & Technology

    1989-07-01

    Keywords: TQM (Total Quality Management), Continuous Process Improvement. The DLA-X Total Quality Management (TQM) Implementation Plan is based on the application of proven Total Quality Management techniques. Quality Policy: Responsibility for quality is delegated to every employee in DLA-X.

  12. The NETL MFiX Suite of multiphase flow models: A brief review and recent applications of MFiX-TFM to fossil energy technologies

    DOE PAGES

    Li, Tingwen; Rogers, William A.; Syamlal, Madhava; ...

    2016-07-29

    Here, the MFiX suite of multiphase computational fluid dynamics (CFD) codes is being developed at the U.S. Department of Energy's National Energy Technology Laboratory (NETL). It includes several different approaches to multiphase simulation: MFiX-TFM, a two-fluid (Eulerian-Eulerian) model; MFiX-DEM, an Eulerian fluid model with a Lagrangian discrete element model for the solids phase; and MFiX-PIC, an Eulerian fluid model with Lagrangian particle 'parcels' representing particle groups. These models are undergoing continuous development and application, with verification, validation, and uncertainty quantification (VV&UQ) as integrated activities. After a brief summary of recent progress in VV&UQ, this article highlights two recent accomplishments in the application of MFiX-TFM to fossil energy technology development. First, a recent application of MFiX to the pilot-scale KBR TRIG™ Transport Gasifier located at DOE's National Carbon Capture Center (NCCC) is described. Gasifier performance over a range of operating conditions was modeled and compared to NCCC operational data to validate the ability of the model to predict parametric behavior. Second, a comparison of code predictions at a detailed fundamental scale is presented for solid sorbents for the post-combustion capture of CO2 from flue gas. Specifically designed NETL experiments are being used to validate hydrodynamics and chemical kinetics for the sorbent-based carbon capture process.

  13. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purposes and goals of the briefing are to (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur on the conclusions so that a plan to use the proposed uplink system can be embarked upon; (4) identify the need for the development of appropriate technology and its infusion into the DSN; and (5) gain advocacy for implementing uplink coding in flight projects. Action Item EMB04-1-14: Show a plan for using uplink coding, including where it is useful or not (include discussion of emergency uplink coding).

  14. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  15. Diagnostic x-ray dosimetry using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Ioppolo, J. L.; Price, R. I.; Tuchyna, T.; Buckley, C. E.

    2002-05-01

    An Electron Gamma Shower version 4 (EGS4) based user code was developed to simulate the absorbed dose in humans during routine diagnostic radiological procedures. Measurements of absorbed dose using thermoluminescent dosimeters (TLDs) were compared directly with EGS4 simulations of absorbed dose in homogeneous, heterogeneous and anthropomorphic phantoms. Realistic voxel-based models characterizing the geometry of the phantoms were used as input to the EGS4 code. The voxel geometry of the anthropomorphic Rando phantom was derived from a CT scan of Rando. The 100 kVp diagnostic energy x-ray spectra of the apparatus used to irradiate the phantoms were measured, and provided as input to the EGS4 code. The TLDs were placed at evenly spaced points symmetrically about the central beam axis, which was perpendicular to the cathode-anode x-ray axis, at a number of depths. The TLD measurements in the homogeneous and heterogeneous phantoms were on average within 7% of the values calculated by EGS4. Estimates of effective dose with errors less than 10% required fewer photon histories (1 × 10^7) than required for the calculation of dose profiles (1 × 10^9). The EGS4 code was able to satisfactorily predict and thereby provide an instrument for reducing patient and staff effective dose imparted during radiological investigations.

  16. End-to-End Modeling with the Heimdall Code to Scope High-Power Microwave Systems

    DTIC Science & Technology

    2007-06-01

    END-TO-END MODELING WITH THE HEIMDALL CODE TO SCOPE HIGH-POWER MICROWAVE SYSTEMS. John A. Swegle, Savannah River National Laboratory ... describe the expert-system code HEIMDALL, which is used to model full high-power microwave systems using over 60 systems-engineering models ... of our calculations of the mass of a supersystem producing 500-MW, 15-ns output pulses in the X band for bursts of 1 s, interspersed with 10-s

  17. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
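
    The core of the second method (scan for start codes, encrypt a fixed amount of data after each) can be sketched as follows. This is a toy: XOR with a short keystream stands in for a real cipher, and the value of N and the sample bytes are invented, since the abstract does not specify them:

```python
# Sketch of selective encryption keyed off H.264 start codes (0x000001).
# XOR is used as a stand-in cipher for illustration only.

START_CODE = b"\x00\x00\x01"
N = 4  # bytes to encrypt after each start code (illustrative value)

def selective_xor(stream: bytes, key: bytes) -> bytes:
    out = bytearray(stream)
    i = stream.find(START_CODE)
    while i != -1:
        begin = i + len(START_CODE)
        for j in range(begin, min(begin + N, len(out))):
            out[j] ^= key[(j - begin) % len(key)]
        i = stream.find(START_CODE, begin)
    return bytes(out)

# Invented sample stream containing two start codes.
data = b"\x00\x00\x01\x65\x88\x84\x00\x33\x00\x00\x01\x41\x9a\x02\x03\x04"
key = b"\x5a\xa5\x3c\xc3"
enc = selective_xor(data, key)
dec = selective_xor(enc, key)        # XOR is its own inverse
assert dec == data and enc != data
print(enc.hex())
```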

  18. X1908+075: An X-Ray Binary with a 4.4 Day Period

    NASA Astrophysics Data System (ADS)

    Wen, Linqing; Remillard, Ronald A.; Bradt, Hale V.

    2000-04-01

    X1908+075 is an optically unidentified and highly absorbed X-ray source that appeared in early surveys such as Uhuru, OSO 7, Ariel 5, HEAO-1, and the EXOSAT Galactic Plane Survey. These surveys measured a source intensity in the range 2-12 mcrab at 2-10 keV, and the position was localized to ~0.5°. We use the Rossi X-Ray Timing Explorer (RXTE) All-Sky Monitor (ASM) to confirm our expectation that a particular Einstein/IPC detection (1E 1908.4+0730) provides the correct position for X1908+075. The analysis of the coded mask shadows from the ASM for the position of 1E 1908.4+0730 yields a persistent intensity of ~8 mcrab (1.5-12 keV) over a 3 yr interval beginning in 1996 February. Furthermore, we detect a period of 4.400 +/- 0.001 days with a false-alarm probability less than 10^-7. The folded light curve is roughly sinusoidal, with an amplitude that is 26% of the mean flux. The X-ray period may be attributed to the scattering and absorption of X-rays through a stellar wind combined with the orbital motion in a binary system. We suggest that X1908+075 is an X-ray binary with a high-mass companion star.
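
    Folding a light curve at a trial period, as done above for the 4.4-day signal, can be sketched in a few lines. The "observations" below are synthetic (a noiseless sinusoid at P = 4.4 d with 26% modulation); they only illustrate the folding step, not the ASM analysis:

```python
import math

# Epoch folding sketch: bin flux by phase (t mod P) / P at a trial period.

P = 4.4                                         # trial period, days
times = [0.3 * k for k in range(1000)]          # synthetic sampling times
flux = [8.0 * (1 + 0.26 * math.sin(2 * math.pi * t / P)) for t in times]

nbins = 10
sums = [0.0] * nbins
counts = [0] * nbins
for t, f in zip(times, flux):
    b = int((t % P) / P * nbins)
    sums[b] += f
    counts[b] += 1
profile = [s / c for s, c in zip(sums, counts)]  # folded light curve

# The folded profile is roughly sinusoidal; its fractional half-amplitude
# comes out close to the injected 26% modulation.
amp = (max(profile) - min(profile)) / 2
mean = sum(profile) / nbins
print(round(amp / mean, 2))
```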

  19. Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base

    NASA Astrophysics Data System (ADS)

    Savage, B.; Snoke, J. A.

    2017-12-01

    The Seismic Analysis Code (SAC) is the result of the toil of many developers over an almost 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size, from 16 to 32 in the 1980s and 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems, including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OSX processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in source code and associated comments. A major concern while improving and maintaining a routinely used, legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Laboratory). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC, including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion.
For the past thirty-plus years

  20. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  1. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
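
    The forward-mode versus reverse-mode distinction described above can be shown on a one-line function. The sketch below is hand-written for illustration (ADIFOR and ADJIFOR generate the analogous derivative code mechanically for whole Fortran programs); it confirms that one reverse sweep reproduces the gradient for which forward mode needs one pass per input:

```python
import math

# Forward vs reverse mode AD on f(x1, x2) = x1*x2 + sin(x1), by hand.

def f_forward(x1, x2, dx1, dx2):
    """Forward mode: propagate one directional derivative with the values."""
    a, da = x1 * x2, dx1 * x2 + x1 * dx2
    b, db = math.sin(x1), math.cos(x1) * dx1
    return a + b, da + db

def f_reverse(x1, x2):
    """Reverse mode: one extra backward sweep yields the whole gradient."""
    a = x1 * x2                  # forward sweep: record intermediates
    b = math.sin(x1)
    y = a + b
    ybar = 1.0                   # backward sweep: adjoint of each variable
    abar = ybar
    bbar = ybar
    x1bar = abar * x2 + bbar * math.cos(x1)
    x2bar = abar * x1
    return y, (x1bar, x2bar)

y, grad = f_reverse(1.0, 2.0)
_, d1 = f_forward(1.0, 2.0, 1.0, 0.0)   # df/dx1: one forward pass per input
_, d2 = f_forward(1.0, 2.0, 0.0, 1.0)   # df/dx2
print(grad == (d1, d2))  # True: same gradient from a single reverse sweep
```

    For n design variables, forward mode costs n passes while the reverse (adjoint) sweep costs roughly one extra pass, which is why the adjoint codes above compute the full gradient in about the time of 7 to 20 function evaluations regardless of the number of shape parameters.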

  2. New quantum codes derived from a family of antiprimitive BCH codes

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
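
    The coset computation underlying this construction is easy to sketch: the q^2-ary cyclotomic coset of s modulo n is {s, s·q^2, s·q^4, ...} mod n. The toy below uses the smallest parameters allowed above (q = 4, m = 2, so n = 257); it only enumerates cosets and does not check the paper's dual-containing conditions:

```python
# q^2-ary cyclotomic cosets modulo n = q^(2m) + 1 (antiprimitive length).

q, m = 4, 2
n = q ** (2 * m) + 1          # n = 257

def cyclotomic_coset(s, base, n):
    """Coset {s, s*base, s*base^2, ...} mod n, sorted."""
    coset, x = [], s % n
    while x not in coset:
        coset.append(x)
        x = (x * base) % n
    return sorted(coset)

base = q * q                  # multiply by q^2 each step
seen, cosets = set(), []
for s in range(n):
    if s not in seen:
        c = cyclotomic_coset(s, base, n)
        cosets.append(c)
        seen.update(c)

# Since q^(2m) = n - 1 = -1 (mod n), the coset of 1 contains both 1 and n-1.
print(len(cosets), cosets[1])
```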

  3. Surface acoustic wave coding for orthogonal frequency coded devices

    NASA Technical Reports Server (NTRS)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system, each device producing plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices, each producing a different OFC signal having the same number of chips and including a chip offset time delay; an algorithm for assigning OFCs to each device; and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  4. Validation of computational code UST3D by the example of experimental aerodynamic data

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2017-02-01

    Numerical simulation of the aerodynamic characteristics of the hypersonic vehicles X-33 and X-34, as well as of a spherically blunted cone, is performed using unstructured meshes. It is demonstrated that the numerical predictions obtained with the computational code UST3D are in acceptable agreement with the experimental data for approximate parameters of the geometry of the hypersonic vehicles, and in excellent agreement with the data for the blunted cone.

  5. Easy Web Interfaces to IDL Code for NSTX Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W.M. Davis

    Reusing code is a well-known software engineering practice that substantially increases the efficiency of code production, as well as reducing errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and because there is no need for X-windows, accounts, passwords, etc. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.

  6. Early MIMD experience on the CRAY X-MP

    NASA Astrophysics Data System (ADS)

    Rhoades, Clifford E.; Stevens, K. G.

    1985-07-01

    This paper describes some early experience with converting four physics simulation programs to the CRAY X-MP, a current Multiple Instruction, Multiple Data (MIMD) computer consisting of two processors, each with an architecture similar to that of the CRAY-1. As a multi-processor, the CRAY X-MP together with the high-speed Solid-state Storage Device (SSD) is an ideal machine upon which to study MIMD algorithms for solving the equations of mathematical physics, because it is fast enough to run real problems. The computer programs used in this study are all FORTRAN versions of original production codes. They range in sophistication from a one-dimensional numerical simulation of collisionless plasma to a two-dimensional hydrodynamics code with heat flow to a couple of three-dimensional fluid dynamics codes with varying degrees of viscous modeling. Early research with a dual processor configuration has shown speed-ups ranging from 1.55 to 1.98. It has been observed that a few simple extensions to FORTRAN allow a typical programmer to achieve a remarkable level of efficiency. These extensions involve the concept of memory local to a concurrent subprogram and memory common to all concurrent subprograms.

  7. Solving Mendelian Mysteries: The Non-coding Genome May Hold the Key.

    PubMed

    Valente, Enza Maria; Bhatia, Kailash P

    2018-02-22

    Despite revolutionary advances in sequencing approaches, many Mendelian disorders have remained unexplained. In this issue of Cell, Aneichyk et al. combine genomic and cell-type-specific transcriptomic data to causally link a non-coding mutation in the ubiquitous TAF1 gene to X-linked dystonia-parkinsonism. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Realization of ActiveX control based on ATL in VC 2008

    NASA Astrophysics Data System (ADS)

    Li, Shuhua; Tie, Yong

    2011-10-01

    ActiveX plays a key role in web development. This paper implements the classic Polygon example program in the Visual C++ 2008 environment and tests each function of the control in the ActiveX Control Test Container. Web code is then created rapidly via ActiveX Control Pad and checked in HTML. The development process and key points requiring attention are summarized systematically, which can guide related developers.

  9. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    ERIC Educational Resources Information Center

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  10. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  11. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  12. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  13. Making your code citable with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and, if formatted properly, are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feed for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  14. 76 FR 77549 - Lummi Nation-Title 20-Code of Laws-Liquor Code

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Lummi Nation--Title 20--Code of Laws--Liquor... amendment to Lummi Nation's Title 20--Code of Laws--Liquor Code. The Code regulates and controls the... this amendment to Title 20--Lummi Nation Code of Laws--Liquor Code by Resolution 2011-038 on March 1...

  15. The crystal structure of the new ternary antimonide Dy3Cu20+xSb11-x (x ≈ 2)

    NASA Astrophysics Data System (ADS)

    Fedyna, L. O.; Bodak, O. I.; Fedorchuk, A. O.; Tokaychuk, Ya. O.

    2005-06-01

    The new ternary antimonide Dy3Cu20+xSb11-x (x ≈ 2) was synthesized and its crystal structure was determined by direct methods from X-ray powder diffraction data (DRON-3M diffractometer, Cu Kα radiation, R = 6.99%, R = 12.27%, R = 11.55%). The compound crystallizes in its own cubic structure type: space group F4̄3m, Pearson code cF272, a = 16.6150(2) Å, Z = 8. The structure of Dy3Cu20+xSb11-x (x ≈ 2) can be derived from the BaHg11 structure type by doubling the lattice parameter and removing 16 atoms. The studied structure was compared with the structures of known compounds that crystallize in the same space group with similar cell parameters.

  16. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting]

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
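    The direct-from-syndrome idea above can be illustrated with a toy single-byte-error analogue (a simplified sketch, not the paper's double-byte-error-correcting scheme; the field polynomial 0x11d, the code length, and the use of the all-zero codeword are our assumptions):

```python
# GF(2^8) log/antilog tables, primitive polynomial 0x11d (an assumption;
# any primitive polynomial would do for this illustration).
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received):
    # S_j = sum_i r_i * alpha^(i*j), here just for j = 0, 1
    s = []
    for j in range(2):
        acc = 0
        for i, r in enumerate(received):
            acc ^= gf_mul(r, EXP[(i * j) % 255])
        s.append(acc)
    return s

def correct_single(received):
    """Fix one byte error directly from the syndromes: for a single
    error e at position p, S0 = e and S1 = e * alpha^p, so the
    location is log(S1/S0) and the value is S0 -- no iterative
    error-locator-polynomial step is needed."""
    s0, s1 = syndromes(received)
    if s0 == 0 and s1 == 0:
        return received              # no error detected
    pos = (LOG[s1] - LOG[s0]) % 255  # error location from syndrome ratio
    fixed = list(received)
    fixed[pos] ^= s0                 # error value is S0 itself
    return fixed

# Inject one byte error into the (valid) all-zero codeword and repair it.
rx = [0] * 16
rx[5] = 0x2A
print(correct_single(rx))  # recovers the all-zero codeword
```

    The double-byte case in the paper works in the same spirit, solving for two locations and two values directly from four syndromes instead of running the iterative algorithm.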

  17. An experiment to assess the cost-benefits of code inspections in large scale software development

    NASA Technical Reports Server (NTRS)

    Porter, A.; Siy, H.; Toman, C. A.; Votta, L. G.

    1994-01-01

    This experiment (currently in progress) is designed to measure costs and benefits of different code inspection methods. It is being performed with a real development team writing software for a commercial product. The dependent variables for each code unit's inspection are the elapsed time and the number of defects detected. We manipulate the method of inspection by randomly assigning reviewers, varying the number of reviewers and the number of teams, and, when using more than one team, randomly assigning author repair and non-repair of detected defects between code inspections. After collecting and analyzing the first 17 percent of the data, we have discovered several interesting facts about reviewers, about the defects recorded during reviewer preparation and during the inspection collection meeting, and about the repairs that are eventually made. (1) Only 17 percent of the defects that reviewers record in their preparations are true defects that are later repaired. (2) Defects recorded at the inspection meetings fall into three categories: 18 percent false positives requiring no author repair, 57 percent soft maintenance where the author makes changes only for readability or code standard enforcement, and 25 percent true defects requiring repair. (3) The median elapsed calendar time for code inspections is 10 working days - 8 working days before the collection meeting and 2 after. (4) In the collection meetings, 31 percent of the defects discovered by reviewers during preparation are suppressed. (5) Finally, 33 percent of the true defects recorded are discovered at the collection meetings and not during any reviewer's preparation. The results to date suggest that inspections with two sessions (two different teams) of two reviewers per session (2sX2p) are the most effective. These two-session inspections may be performed with author repair or with no author repair between the two sessions. We are finding that the two-session, two-person with repair (2sX2p

  18. Fission yield and criticality excursion code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, A.

    2000-06-30

    The ANSI/ANS-8.3 standard allows a maximum yield not to exceed 2 x 10 fissions to be assumed in calculations that determine whether the alarm system will be effective. It is common practice to use this allowance or to develop some other yield based on past criticality accident history or excursion experiments. The literature on the subject of yields discusses maximum yields larger and somewhat smaller than the ANS-8.3 permissive value. The ability to model criticality excursions and vary the parameters to determine a credible maximum yield for operation-specific cases has been available for some time but is not in common use by criticality safety specialists. The topic of yields for various solutions, metals, oxide powders, etc., in various geometries and containers has been published by laboratory specialists or university staff and students for many decades, but this work has not been readily available to practitioners. The need for best-estimate calculations of fission yields with a well-validated criticality excursion code has long been recognized, but no coordinated effort has been made so far to develop a generalized and well-validated excursion code for different types of systems. In this paper, the current practices for estimating fission yields are summarized along with their shortcomings for the 12-Rad zone (at SRS) and Criticality Alarm System (CAS) calculations. Finally, the need for a user-friendly excursion code is reemphasized.

  19. Modeling of Dynamic Behavior of Carbon Fiber-Reinforced Polymer (CFRP) Composite under X-ray Radiation.

    PubMed

    Zhang, Kun; Tang, Wenhui; Fu, Kunkun

    2018-01-16

    Carbon fiber-reinforced polymer (CFRP) composites have been increasingly used in spacecraft applications. Spacecraft may encounter high-energy-density X-ray radiation in outer space that can cause severe damage. To protect spacecraft from such unexpected damage, it is essential to predict the dynamic behavior of CFRP composites under X-ray radiation. In this study, we developed an in-house three-dimensional explicit finite element (FEM) code to investigate the dynamic responses of CFRP composite under X-ray radiation for the first time, by incorporating a modified PUFF equation of state. First, the blow-off impulse (BOI) momentum of an aluminum panel was predicted by our FEM code and compared with an existing radiation experiment. Then, the FEM code was utilized to determine the dynamic behavior of a CFRP composite under various radiation conditions. It was found that the numerical result was comparable with the experimental one. Furthermore, the CFRP composite was more effective than the aluminum panel in reducing radiation-induced pressure and BOI momentum. The numerical results also revealed that a 1 keV X-ray led to vaporization of surface materials and a high-magnitude compressive stress wave, whereas a low-magnitude stress wave was generated with no surface vaporization when a 3 keV X-ray was applied.

  20. Modeling of Dynamic Behavior of Carbon Fiber-Reinforced Polymer (CFRP) Composite under X-ray Radiation

    PubMed Central

    Zhang, Kun; Tang, Wenhui; Fu, Kunkun

    2018-01-01

    Carbon fiber-reinforced polymer (CFRP) composites have been increasingly used in spacecraft applications. Spacecraft may encounter high-energy-density X-ray radiation in outer space that can cause severe damage. To protect spacecraft from such unexpected damage, it is essential to predict the dynamic behavior of CFRP composites under X-ray radiation. In this study, we developed an in-house three-dimensional explicit finite element (FEM) code to investigate the dynamic responses of CFRP composite under X-ray radiation for the first time, by incorporating a modified PUFF equation of state. First, the blow-off impulse (BOI) momentum of an aluminum panel was predicted by our FEM code and compared with an existing radiation experiment. Then, the FEM code was utilized to determine the dynamic behavior of a CFRP composite under various radiation conditions. It was found that the numerical result was comparable with the experimental one. Furthermore, the CFRP composite was more effective than the aluminum panel in reducing radiation-induced pressure and BOI momentum. The numerical results also revealed that a 1 keV X-ray led to vaporization of surface materials and a high-magnitude compressive stress wave, whereas a low-magnitude stress wave was generated with no surface vaporization when a 3 keV X-ray was applied. PMID:29337891

  1. The pig X and Y Chromosomes: structure, sequence, and evolution

    PubMed Central

    Skinner, Benjamin M.; Sargent, Carole A.; Churcher, Carol; Hunt, Toby; Herrero, Javier; Loveland, Jane E.; Dunn, Matt; Louzada, Sandra; Fu, Beiyuan; Chow, William; Gilbert, James; Austin-Guest, Siobhan; Beal, Kathryn; Carvalho-Silva, Denise; Cheng, William; Gordon, Daria; Grafham, Darren; Hardy, Matt; Harley, Jo; Hauser, Heidi; Howden, Philip; Howe, Kerstin; Lachani, Kim; Ellis, Peter J.I.; Kelly, Daniel; Kerry, Giselle; Kerwin, James; Ng, Bee Ling; Threadgold, Glen; Wileman, Thomas; Wood, Jonathan M.D.; Yang, Fengtang; Harrow, Jen; Affara, Nabeel A.; Tyler-Smith, Chris

    2016-01-01

    We have generated an improved assembly and gene annotation of the pig X Chromosome, and a first draft assembly of the pig Y Chromosome, by sequencing BAC and fosmid clones from Duroc animals and incorporating information from optical mapping and fiber-FISH. The X Chromosome carries 1033 annotated genes, 690 of which are protein coding. Gene order closely matches that found in primates (including humans) and carnivores (including cats and dogs), which is inferred to be ancestral. Nevertheless, several protein-coding genes present on the human X Chromosome were absent from the pig, and 38 pig-specific X-chromosomal genes were annotated, 22 of which were olfactory receptors. The pig Y-specific Chromosome sequence generated here comprises 30 megabases (Mb). A 15-Mb subset of this sequence was assembled, revealing two clusters of male-specific low copy number genes, separated by an ampliconic region including the HSFY gene family, which together make up most of the short arm. Both clusters contain palindromes with high sequence identity, presumably maintained by gene conversion. Many of the ancestral X-related genes previously reported in at least one mammalian Y Chromosome are represented either as active genes or partial sequences. This sequencing project has allowed us to identify genes—both single copy and amplified—on the pig Y Chromosome, to compare the pig X and Y Chromosomes for homologous sequences, and thereby to reveal mechanisms underlying pig X and Y Chromosome evolution. PMID:26560630

  2. X-ray Fluorescence Spectroscopy: the Potential of Astrophysics-developed Techniques

    NASA Astrophysics Data System (ADS)

    Elvis, M.; Allen, B.; Hong, J.; Grindlay, J.; Kraft, R.; Binzel, R. P.; Masterton, R.

    2012-12-01

    X-ray fluorescence from the surface of airless bodies has been studied since the Apollo X-ray fluorescence experiment mapped parts of the lunar surface in 1971-1972. That experiment used a collimated proportional counter with a resolving power of ~1 and a beam size of ~1 degree. Filters separated only the Mg, Al and Si lines. We review progress in X-ray detectors and imaging for astrophysics and show how these advances enable much more powerful use of X-ray fluorescence for the study of airless bodies. Astrophysics X-ray instrumentation has developed enormously since 1972. Low-noise, high-quantum-efficiency X-ray CCDs have flown on ASCA, XMM-Newton, the Chandra X-ray Observatory, Swift and Suzaku, and are the workhorses of X-ray astronomy. They normally span 0.5 to ~8 keV with an energy resolution of ~100 eV. New developments in silicon-based detectors, especially individually pixel-addressable devices such as CMOS detectors, can withstand many orders of magnitude more radiation than conventional CCDs before degradation. The capability of high read rates provides dynamic range and temporal resolution. Additionally, the rapid read rates minimize shot noise from thermal dark current and optical light. CMOS detectors can therefore run at warmer temperatures and with ultra-thin optical blocking filters. Thin OBFs mean near-unity quantum efficiency below 1 keV, thus maximizing response at the C and O lines. X-ray imaging has advanced similarly far. Two types of imager are now available: specular reflection and coded apertures. X-ray mirrors have been flown on the Einstein Observatory, XMM-Newton, Chandra and others. However, as X-ray reflection only occurs at small (~1 degree) incidence angles, which then require long focal lengths (meters), mirrors are not usually practical for planetary missions. Moreover, the field of view of X-ray mirrors is comparable to the incidence angle, so they can only image relatively small regions. More useful…

  3. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE PAGES

    Vay, J. -L.; Almgren, A.; Bell, J.; ...

    2018-01-31

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.

  4. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J. -L.; Almgren, A.; Bell, J.

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.

  5. xLPR Sim Editor 1.0 User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariner, Paul E.

    2017-03-01

    The United States Nuclear Regulatory Commission, in cooperation with the Electric Power Research Institute, contracted Sandia National Laboratories to develop the framework of a probabilistic fracture mechanics assessment code called xLPR (Extremely Low Probability of Rupture) Version 2.0. The purpose of xLPR is to evaluate degradation mechanisms in piping systems at nuclear power plants and to predict the probability of rupture. This report is a user's guide for xLPR Sim Editor 1.0, a graphical user interface for creating and editing the xLPR Version 2.0 input file and for creating, editing, and using the xLPR Version 2.0 database files. The xLPR Sim Editor provides a user-friendly way to change simulation options and input values, select input datasets from xLPR databases, identify inputs needed for a simulation, and create and modify an input file for xLPR.

  6. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.

  7. f1: a code to compute Appell's F1 hypergeometric function

    NASA Astrophysics Data System (ADS)

    Colavecchia, F. D.; Gasaneo, G.

    2004-02-01

    In this work we present the FORTRAN code to compute the hypergeometric function F1(α, β1, β2, γ; x, y) of Appell. The program can compute the F1 function for real values of the variables {x, y} and complex values of the parameters {α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summary: Title of program: f1. Catalogue identifier: ADSJ. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: PC compatibles, SGI Origin2∗. Operating systems under which the program has been tested: Linux, IRIX. Programming language used: Fortran 90. Memory required to execute with typical data: 4 kbytes. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 52 325. Distribution format: tar gzip file. External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or the chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function. Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for |x| < 1 and |y| < 1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the precedent cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential
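    The convergent double series mentioned above is simple to sketch directly (a minimal illustration of the |x| < 1, |y| < 1 region only, not the published FORTRAN code's full multi-strategy approach; the function name and fixed truncation depth are our choices):

```python
import math

def f1_series(a, b1, b2, c, x, y, nmax=60):
    """Appell F1 by its double power series (valid for |x|<1, |y|<1):
    F1 = sum_{m,n>=0} (a)_{m+n} (b1)_m (b2)_n / ((c)_{m+n} m! n!) x^m y^n,
    accumulated with term-ratio recurrences to avoid huge factorials."""
    total = 0.0
    t_m0 = 1.0                      # term at (m, n=0)
    for m in range(nmax):
        t = t_m0
        for n in range(nmax):
            total += t
            # ratio term(m, n+1) / term(m, n)
            t *= (a + m + n) * (b2 + n) / ((c + m + n) * (n + 1)) * y
        # ratio term(m+1, 0) / term(m, 0)
        t_m0 *= (a + m) * (b1 + m) / ((c + m) * (m + 1)) * x
    return total

# Sanity check against a known reduction: F1(a, b1, b2, c, x, 0)
# = 2F1(a, b1; c; x), and with a = b1 = 1, c = 2 this is -ln(1-x)/x.
print(f1_series(1, 1, 1, 2, 0.5, 0.0))  # ~ 2 ln 2 = 1.3862943...
```

    Outside the unit bidisc a real implementation must switch to the analytic continuations the abstract mentions; this sketch covers only the series region.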

  8. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, their threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
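    The precode-repeat-interleave-accumulate chain described above can be sketched with a toy encoder (illustrative parameters only: the repeat factor and pseudorandom interleaver are our choices, no puncturing is applied, and this is not the paper's optimized ensemble):

```python
import random

def accumulate(bits):
    """Rate-1 accumulator 1/(1+D): y_i = x_i XOR y_(i-1), i.e. a running XOR."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ara_encode(info, repeat=3, seed=0):
    precoded = accumulate(info)                # outer accumulator as precoder
    repeated = [b for b in precoded for _ in range(repeat)]
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)          # fixed pseudorandom interleaver
    interleaved = [repeated[i] for i in perm]
    return accumulate(interleaved)             # inner (rate-1) accumulator

codeword = ara_encode([1, 0, 1, 1])
print(len(codeword))  # 12 code bits for 4 info bits: rate 1/3 before puncturing
```

    Every stage (running XOR, repetition, fixed permutation) is linear over GF(2), which is why the overall code is linear and admits the LDPC/protograph view used for belief-propagation decoding.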

  9. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  10. Systematic design and three-dimensional simulation of X-ray FEL oscillator for Shanghai Coherent Light Facility

    NASA Astrophysics Data System (ADS)

    Li, Kai; Deng, Haixiao

    2018-07-01

    The Shanghai Coherent Light Facility (SCLF) is a quasi-continuous wave hard X-ray free electron laser facility, which is currently under construction. Due to the high repetition rate and high-quality electron beams, it is straightforward to consider X-ray free electron laser oscillator (XFELO) operation for the SCLF. In this paper, the main processes for XFELO design, and parameter optimization of the undulator, X-ray cavity, and electron beam are described. A three-dimensional X-ray crystal Bragg diffraction code, named BRIGHT, was introduced for the first time, which can be combined with the GENESIS and OPC codes for the numerical simulations of the XFELO. The performance of the XFELO of the SCLF is investigated and optimized by theoretical analysis and numerical simulation.

  11. Regolith X-Ray Imaging Spectrometer (REXIS) Aboard the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Masterson, R. A.; Chodas, M.; Bayley, L.; Allen, B.; Hong, J.; Biswas, P.; McMenamin, C.; Stout, K.; Bokhour, E.; Bralower, H.; Carte, D.; Chen, S.; Jones, M.; Kissel, S.; Schmidt, F.; Smith, M.; Sondecker, G.; Lim, L. F.; Lauretta, D. S.; Grindlay, J. E.; Binzel, R. P.

    2018-02-01

    The Regolith X-ray Imaging Spectrometer (REXIS) is the student collaboration experiment proposed and built by an MIT-Harvard team, launched aboard NASA's OSIRIS-REx asteroid sample return mission. REXIS complements the scientific investigations of other OSIRIS-REx instruments by determining the relative abundances of key elements present on the asteroid's surface by measuring the X-ray fluorescence spectrum (stimulated by the natural solar X-ray flux) over the range of energies 0.5 to 7 keV. REXIS consists of two components: a main imaging spectrometer with a coded aperture mask and a separate solar X-ray monitor to account for the Sun's variability. In addition to element abundance ratios (relative to Si) pinpointing the asteroid's most likely meteorite association, REXIS also maps elemental abundance variability across the asteroid's surface using the asteroid's rotation as well as the spacecraft's orbital motion. Image reconstruction at the highest resolution is facilitated by the coded aperture mask. Through this operation, REXIS will be the first application of X-ray coded aperture imaging to planetary surface mapping, making this student-built instrument a pathfinder toward future planetary exploration. To date, 60 students at the undergraduate and graduate levels have been involved with the REXIS project, with the hands-on experience translating to a dozen Master's and Ph.D. theses and other student publications.

  12. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180-degree-geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from a basic assumption shared by both codes: each assumes a "free" electron model for Compton interactions. This assumption underestimates the results and undermines the validity of comparisons between predicted and experimental spectra.

  13. X-chromosome inactivation in development and cancer.

    PubMed

    Chaligné, Ronan; Heard, Edith

    2014-08-01

    X-chromosome inactivation represents an epigenetics paradigm and a powerful model system of facultative heterochromatin formation triggered by a non-coding RNA, Xist, during development. Once established, the inactive state of the inactive X chromosome (Xi) is highly stable in somatic cells, thanks to a combination of chromatin-associated proteins, DNA methylation and nuclear organization. However, sporadic reactivation of X-linked genes has been reported during ageing and in transformed cells, and disappearance of the Barr body is frequently observed in cancer cells. In this review we summarise current knowledge on the epigenetic changes that accompany X inactivation and discuss the extent to which the inactive X chromosome may be epigenetically or genetically perturbed in breast cancer. Copyright © 2014 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  14. Mad-X a worthy successor for MAD8?

    NASA Astrophysics Data System (ADS)

    Schmidt, F.

    2006-03-01

    MAD-X is the successor at CERN to MAD8, a program for accelerator design and simulation with a long history. We had to give up on MAD8 since the code had evolved in such a way that the maintenance and upgrades had become increasingly difficult. In particular, the memory management with the Zebra banks seemed outdated. MAD-X was first released in June, 2002. It offers most of the MAD8 functionality, with some additions, corrections, and extensions. The most important of these extensions is the interface to PTC, the Polymorphic Tracking Code by E. Forest. The most relevant new features of MAD-X are: languages: C, Fortran77, and Fortran90; dynamic memory allocation: in the core program written in C; strictly modular organization, modified and extended input language; symplectic and arbitrary exact description of all elements via PTC; Taylor Maps and Normal Form techniques using PTC. It is also important to note that we have adopted a new style for program development and maintenance that relies heavily on active maintenance of modules by the users themselves. Proposals for collaboration as with KEK, Japan and GSI, Germany are therefore very welcome.

  15. A search for outflows from X-ray bright points in coronal holes

    NASA Technical Reports Server (NTRS)

    Mullan, D. J.; Waldron, W. L.

    1986-01-01

    Properties of X-ray bright points were investigated using two of the instruments on the Solar Maximum Mission. The mass outflows from magnetic regions were modeled using a two-dimensional MHD code. It was concluded that mass outflow from X-ray bright points can be detected provided that the magnetic topology is favorable.

  16. Study of X-ray photoionized Fe plasma and comparisons with astrophysical modeling codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foord, M E; Heeter, R F; Chung, H

    The charge state distributions of Fe, Na and F are determined in a photoionized laboratory plasma using high-resolution X-ray spectroscopy. Independent measurements of the density and radiation flux indicate the ionization parameter ζ in the plasma reaches values ζ = 20-25 erg cm s⁻¹ under near steady-state conditions. A curve-of-growth analysis, which includes the effects of velocity gradients in a one-dimensional expanding plasma, fits the observed line opacities. Absorption lines are tabulated in the wavelength region 8-17 Å. Initial comparisons with a number of astrophysical X-ray photoionization models show reasonable agreement.

  17. Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Sjenitzer, Bart L.

    2014-06-01

    To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques, such as forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. Simulation of practical reactor transients also requires the feedback effect from the thermal-hydraulics to be included. This requires coupling the Monte Carlo code with a thermal-hydraulics (TH) code, which provides the temperature distribution in the reactor and thereby affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, which directly influences the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises by about 10 decades and finally stabilises at a much higher level than the initial power. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail.
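
    The "forced decay of precursors in each time interval" technique can be sketched as follows: rather than sampling the decay time from the full exponential distribution (which usually falls outside the current interval), the decay is forced into [t1, t2] and the sample carries the interval's decay probability as a statistical weight. This is a hedged sketch of the generic technique with a made-up decay constant, not the MCNP5/TRIPOLI4 source code:

```python
import math
import random

# Forced decay of a delayed-neutron precursor inside a time interval
# [t1, t2]: sample the decay time from the exponential distribution
# truncated to the interval, and attach the interval's decay
# probability as a statistical weight so the estimator stays unbiased.
lam = 0.08          # precursor decay constant (1/s), illustrative only
t1, t2 = 0.0, 1.0   # current time interval (s)

def forced_decay(rng: random.Random):
    # probability that the precursor decays inside [t1, t2]
    p = math.exp(-lam * t1) - math.exp(-lam * t2)
    u = rng.random()
    # inverse-CDF sampling of the truncated exponential
    t = -math.log(math.exp(-lam * t1) - u * p) / lam
    return t, p     # forced decay time and its statistical weight
```

    Every history then scores its contribution multiplied by the weight p, so each sampled precursor contributes to the interval's statistics instead of most histories contributing nothing.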

  18. Mechanical code comparator

    DOEpatents

    Peter, Frank J.; Dalton, Larry J.; Plummer, David W.

    2002-01-01

    A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.

  19. Entanglement entropy from tensor network states for stabilizer codes

    NASA Astrophysics Data System (ADS)

    He, Huan; Zheng, Yunqin; Bernevig, B. Andrei; Regnault, Nicolas

    2018-03-01

    In this paper, we present the construction of tensor network states (TNS) for some of the degenerate ground states of three-dimensional (3D) stabilizer codes. We then use the TNS formalism to obtain the entanglement spectrum and entropy of these ground states for some special cuts. In particular, we work out examples of the 3D toric code, the X-cube model, and the Haah code. The latter two models belong to the category of "fracton" models proposed recently, while the first one belongs to the conventional topological phases. We mention the cases for which the entanglement entropy and spectrum can be calculated exactly: For these, the constructed TNS is a singular value decomposition (SVD) of the ground states with respect to particular entanglement cuts. Apart from the area law, the entanglement entropies also have constant and linear corrections for the fracton models, while the entanglement entropies for the toric code models only have constant corrections. For the cuts we consider, the entanglement spectra of these three models are completely flat. We also conjecture that the negative linear correction to the area law is a signature of extensive ground-state degeneracy. Moreover, the transfer matrices of these TNSs can be constructed. We show that the transfer matrices are projectors whose eigenvalues are either 1 or 0. The number of nonzero eigenvalues is tightly related to the ground-state degeneracy.

  20. The EDIT-COMGEOM Code

    DTIC Science & Technology

    1975-09-01

    This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code. The EDIT-COMGEOM code converts the target description data which was used in the MAGIC computer code to the target description data which can be used in the GIFT computer code.

  1. Surface code implementation of block code state distillation.

    PubMed

    Fowler, Austin G; Devitt, Simon J; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer, better copies. Until recently, the lowest overhead method of distilling states produced a single improved [formula: see text] state given 15 input copies. New block code state distillation methods can produce k improved [formula: see text] states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to that of the old one. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
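
    The overhead arithmetic quoted above is easy to make concrete. A minimal sketch (only the 15-to-1 and (3k + 8)-to-k counts come from the abstract; the comparison itself is illustrative):

```python
# Input copies consumed per improved output state, per the counts
# quoted in the abstract: the old 15-to-1 protocol vs. the block
# (3k+8)-to-k protocol.

def copies_per_state_15to1() -> float:
    return 15.0

def copies_per_state_block(k: int) -> float:
    return (3 * k + 8) / k

# Even k = 1 needs only 11 raw copies instead of 15, and the cost per
# state approaches 3 as k grows -- which is why the remaining question,
# answered in the paper, is how the surface-code implementation
# overhead compares in practice.
for k in (1, 2, 8, 100):
    print(k, copies_per_state_block(k))
```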

  2. Surface code implementation of block code state distillation

    PubMed Central

    Fowler, Austin G.; Devitt, Simon J.; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer, better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to that of the old one. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868

  3. Investigation of structural, electronic, elastic and optical properties of Cd(1-x-y)Zn(x)Hg(y)Te alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamer, M., E-mail: mehmet.tamer@zirve.edu.tr

    2016-06-15

    Structural, optical and electronic properties and elastic constants of Cd(1-x-y)Zn(x)Hg(y)Te alloys have been studied by employing the commercial code CASTEP, based on density functional theory. The generalized gradient approximation and local density approximation were utilized for the exchange correlation. The calculations yield the elastic constants, bulk modulus, band gap and Fermi energy of the compounds, and the dielectric constants and refractive index are obtained through the Kramers–Kronig relations. In addition, X-ray measurements provide the elastic constants and confirm Vegard's law. The results obtained from theory and experiment are in agreement.
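
    Vegard's law, mentioned above, linearly interpolates an alloy property between its end members. A minimal sketch for the lattice constant of Cd(1-x-y)Zn(x)Hg(y)Te, using approximate literature lattice constants for the binary end members (illustrative values, not numbers from this paper):

```python
# Vegard's-law sketch for a quaternary alloy: the lattice constant is
# the composition-weighted average of the binary end members. The
# values below are approximate literature lattice constants (angstroms)
# and serve only to illustrate the interpolation.
A_CDTE, A_ZNTE, A_HGTE = 6.482, 6.104, 6.460

def vegard(x: float, y: float) -> float:
    """Lattice constant of Cd(1-x-y)Zn(x)Hg(y)Te by linear interpolation."""
    assert 0.0 <= x and 0.0 <= y and x + y <= 1.0
    return (1.0 - x - y) * A_CDTE + x * A_ZNTE + y * A_HGTE
```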

  4. Detection of Explosive Devices using X-ray Backscatter Radiation

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.

    2002-09-01

    It is our goal to develop a coded aperture based X-ray backscatter imaging detector that will provide sufficient speed, contrast and spatial resolution to detect Antipersonnel Landmines and Improvised Explosive Devices (IED). While our final objective is to field a hand-held detector, we have currently constrained ourselves to a design that can be fielded on a small robotic platform. Coded aperture imaging has been used by the observational gamma astronomy community for a number of years. However, it is the recent advances in the field of medical nuclear imaging that have allowed for the application of the technique to a backscatter scenario. In addition, driven by requirements in medical applications, advances in X-ray detection are continually being made, and detectors are now being produced that are faster, cheaper and lighter than those of only a decade ago. With these advances, a coded aperture hand-held imaging system has only recently become a possibility. This paper will begin with an introduction to the technique, identify recent advances which have made this approach possible, present a simulated example case, and conclude with a discussion on future work.

  5. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot easily be migrated to run on GPUs. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. Methods: We have implemented the SPH equations to model gases, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.

  6. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    ERIC Educational Resources Information Center

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  7. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
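
    The interleaving of the outer code can be pictured with a generic block interleaver of degree m: symbols are written into an m-row table row by row and read out column by column, so a burst of inner-decoder errors is spread across m different outer codewords. A minimal sketch of that generic structure (not the paper's exact construction):

```python
# Block interleaver of degree m: write the symbol stream row by row
# into m rows, read it out column by column. A channel burst of length
# b then touches at most ceil(b/m) symbols of any single outer
# codeword, which is what lets the outer code correct it.

def interleave(seq, m):
    assert len(seq) % m == 0
    n = len(seq) // m           # symbols per row
    # row r holds seq[r*n : (r+1)*n]; read column by column
    return [seq[r * n + c] for c in range(n) for r in range(m)]

def deinterleave(seq, m):
    assert len(seq) % m == 0
    n = len(seq) // m
    # inverse mapping: write column by column, read row by row
    return [seq[c * m + r] for r in range(m) for c in range(n)]
```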

  8. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
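
    The errors-and-erasures decoding mentioned above follows the standard Reed-Solomon capability bound: with minimum distance d = n - k + 1, a pattern of v symbol errors and e erasures is correctable when 2v + e ≤ n - k. Erasing an unreliable symbol therefore costs half as much redundancy as correcting an undetected error, which is why the reliability information from the modified Viterbi decoder pays off. A small sketch of the condition (standard RS theory, not code from the paper):

```python
# Errors-and-erasures decoding capability of an (n, k) Reed-Solomon
# code: minimum distance d = n - k + 1, so a received word with v
# symbol errors and e erasures is decodable when 2v + e <= n - k.

def rs_decodable(n: int, k: int, errors: int, erasures: int) -> bool:
    return 2 * errors + erasures <= n - k
```

    For the widely used (255, 223) RS code, n - k = 32, so up to 16 errors, or 32 erasures, or any mix with 2v + e ≤ 32 is correctable.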

  9. Industrial Code Development

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1991-01-01

    The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first codes to be completed, which are presently being incorporated into the KBS, are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.

  10. Girls Who Code Club | College of Engineering & Applied Science

    Science.gov Websites

    Outreach listing for the Girls Who Code Club at the College of Engineering & Applied Science, University of Wisconsin-Milwaukee, alongside the FIRST Tech Challenge and the NSF I-Corps Site of Southeastern Wisconsin.

  11. New optimal asymmetric quantum codes constructed from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Lü, Liangdong

    2017-02-01

    In this paper, we propose the construction of asymmetric quantum codes from two families of constacyclic codes over the finite field GF(q^2) of code length n, where for the first family, q is an odd prime power with the form 4t + 1 (t ≥ 1 is integer) or 4t - 1 (t ≥ 2 is integer) and n1 = (q^2 + 1)/2; for the second family, q is an odd prime power with the form 10t + 3 or 10t + 7 (t ≥ 0 is integer) and n2 = (q^2 + 1)/5. As a result, families of new asymmetric quantum codes [[n,k,dz/dx

  12. Quantum error-correcting codes from algebraic geometry codes of Castle type

    NASA Astrophysics Data System (ADS)

    Munuera, Carlos; Tenório, Wanderson; Torres, Fernando

    2016-10-01

    We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
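
    The CSS construction referenced above turns a pair of nested classical codes into a quantum code; in the simplest self-orthogonal case, a classical [n, k] code containing its dual yields an [[n, 2k - n]] quantum code. A textbook sketch with the [7, 4] Hamming code, which gives the [[7, 1, 3]] Steane code (the standard example, not one of the Castle codes studied in the paper):

```python
# CSS sketch: a classical [n,k] code C whose dual is contained in C
# (equivalently, whose parity-check matrix H satisfies H @ H^T = 0
# over GF(2)) yields an [[n, 2k - n]] quantum code. The [7,4] Hamming
# parity-check matrix below gives the [[7,1,3]] Steane code.

H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def self_orthogonal(h) -> bool:
    # every pair of rows must have even overlap (H H^T = 0 mod 2)
    return all(
        sum(a * b for a, b in zip(r1, r2)) % 2 == 0
        for r1 in h for r2 in h
    )

n, k = 7, 4
assert self_orthogonal(H)
print("CSS code parameters: [[%d, %d]]" % (n, 2 * k - n))  # prints [[7, 1]]
```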

  13. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5 x 5 x 2 in.³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24 x 2.5 x 3 in.³, called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, the algorithms were developed in parallel with the detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data.
Results of image reconstruction algorithms at various speeds and distances will be presented as well as
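
    The coded-aperture half of the reconstruction rests on a classic principle: the detector records the source distribution convolved with the mask pattern, and correlating the recording with a matched decoding array recovers the source. A 1-D toy sketch with a length-7 quadratic-residue mask, whose mask/decoder pair has a delta-like correlation (an illustration of the principle only; the TMI uses a 2-D random mask and far more involved algorithms):

```python
# Toy 1-D coded-aperture imaging, stdlib only. For this length-7
# quadratic-residue mask A and decoder G = 2A - 1, the correlation of
# A with G equals 4*delta - 1, so decoding yields 4*obj - total_flux,
# from which the object is recovered exactly.
N = 7
mask = [1 if i in (1, 2, 4) else 0 for i in range(N)]  # quadratic residues mod 7
decoder = [2 * m - 1 for m in mask]

def cconv(x, h):
    # circular convolution
    return [sum(x[i] * h[(k - i) % N] for i in range(N)) for k in range(N)]

def ccorr(x, h):
    # circular correlation
    return [sum(x[(k + j) % N] * h[j] for j in range(N)) for k in range(N)]

obj = [5, 0, 3, 0, 0, 1, 0]        # unknown source distribution
detector = cconv(obj, mask)        # what the detector records
raw = ccorr(detector, decoder)     # equals 4*obj - total_flux here
total_flux = -sum(raw) / 3         # bookkeeping specific to this pattern
rec = [(r + total_flux) / 4 for r in raw]
```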

  14. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October 2017.

  15. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to reduce the sum rate compared with independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  16. A vital sugar code for ricin toxicity.

    PubMed

    Taubenschmid, Jasmin; Stadlmann, Johannes; Jost, Markus; Klokk, Tove Irene; Rillahan, Cory D; Leibbrandt, Andreas; Mechtler, Karl; Paulson, James C; Jude, Julian; Zuber, Johannes; Sandvig, Kirsten; Elling, Ulrich; Marquardt, Thorsten; Thiel, Christian; Koerner, Christian; Penninger, Josef M

    2017-11-01

    Ricin is one of the most feared bioweapons in the world due to its extreme toxicity and easy access. Since no antidote exists, it is of paramount importance to identify the pathways underlying ricin toxicity. Here, we demonstrate that the Golgi GDP-fucose transporter Slc35c1 and fucosyltransferase Fut9 are key regulators of ricin toxicity. Genetic and pharmacological inhibition of fucosylation renders diverse cell types resistant to ricin via deregulated intracellular trafficking. Importantly, cells from a patient with SLC35C1 deficiency are also resistant to ricin. Mechanistically, we confirm that reduced fucosylation leads to increased sialylation of Lewis X structures and thus masking of ricin-binding sites. Inactivation of the sialyltransferase responsible for modifications of Lewis X (St3Gal4) increases the sensitivity of cells to ricin, whereas its overexpression renders cells more resistant to the toxin. Thus, we have provided unprecedented insights into an evolutionarily conserved modular sugar code that can be manipulated to control ricin toxicity.

  17. A vital sugar code for ricin toxicity

    PubMed Central

    Taubenschmid, Jasmin; Stadlmann, Johannes; Jost, Markus; Klokk, Tove Irene; Rillahan, Cory D; Leibbrandt, Andreas; Mechtler, Karl; Paulson, James C; Jude, Julian; Zuber, Johannes; Sandvig, Kirsten; Elling, Ulrich; Marquardt, Thorsten; Thiel, Christian; Koerner, Christian; Penninger, Josef M

    2017-01-01

    Ricin is one of the most feared bioweapons in the world due to its extreme toxicity and easy access. Since no antidote exists, it is of paramount importance to identify the pathways underlying ricin toxicity. Here, we demonstrate that the Golgi GDP-fucose transporter Slc35c1 and fucosyltransferase Fut9 are key regulators of ricin toxicity. Genetic and pharmacological inhibition of fucosylation renders diverse cell types resistant to ricin via deregulated intracellular trafficking. Importantly, cells from a patient with SLC35C1 deficiency are also resistant to ricin. Mechanistically, we confirm that reduced fucosylation leads to increased sialylation of Lewis X structures and thus masking of ricin-binding sites. Inactivation of the sialyltransferase responsible for modifications of Lewis X (St3Gal4) increases the sensitivity of cells to ricin, whereas its overexpression renders cells more resistant to the toxin. Thus, we have provided unprecedented insights into an evolutionarily conserved modular sugar code that can be manipulated to control ricin toxicity. PMID:28925387

  18. TOOKUIL: A case study in user interface development for safety code application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, D.L.; Harkins, C.K.; Hoole, J.G.

    1997-07-01

    Traditionally, there has been a very high learning curve associated with using nuclear power plant (NPP) analysis codes. Even for seasoned plant analysts and engineers, the process of building or modifying an input model for present day NPP analysis codes is tedious, error prone, and time consuming. Current cost constraints and performance demands place an additional burden on today's safety analysis community. Advances in graphical user interface (GUI) technology have been applied to obtain significant productivity and quality assurance improvements for Transient Reactor Analysis Code (TRAC) input model development. KAPL Inc. has developed an X Windows-based graphical user interface named TOOKUIL which supports the design and analysis process, acting as a preprocessor, runtime editor, help system, and post processor for TRAC. This paper summarizes the objectives of the project, the GUI development process and experiences, and the resulting end product, TOOKUIL.

  19. Coding in Muscle Disease.

    PubMed

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  20. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  1. X-33 Aerodynamic and Aeroheating Computations for Wind Tunnel and Flight Conditions

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Thompson, Richard A.; Murphy, Kelly J.; Nowak, Robert J.; Riley, Christopher J.; Wood, William A.; Alter, Stephen J.; Prabhu, Ramadas K.

    1999-01-01

    This report provides an overview of hypersonic Computational Fluid Dynamics research conducted at the NASA Langley Research Center to support the Phase II development of the X-33 vehicle. The X-33, which is being developed by Lockheed-Martin in partnership with NASA, is an experimental Single-Stage-to-Orbit demonstrator that is intended to validate critical technologies for a full-scale Reusable Launch Vehicle. As part of the development of the X-33, CFD codes have been used to predict the aerodynamic and aeroheating characteristics of the vehicle. Laminar and turbulent predictions were generated for the X-33 vehicle using two finite-volume Navier-Stokes solvers. Inviscid solutions were also generated with an Euler code. Computations were performed for Mach numbers of 4.0 to 10.0 at angles-of-attack from 10 deg to 48 deg with body flap deflections of 0, 10 and 20 deg. Comparisons between predictions and wind tunnel aerodynamic and aeroheating data are presented in this paper. Aeroheating and aerodynamic predictions for flight conditions are also presented.

  2. cncRNAs: Bi-functional RNAs with protein coding and non-coding functions

    PubMed Central

    Kumari, Pooja; Sampath, Karuna

    2015-01-01

    For many decades, the major function of mRNA was thought to be to provide protein-coding information embedded in the genome. The advent of high-throughput sequencing has led to the discovery of pervasive transcription of eukaryotic genomes and opened the world of RNA-mediated gene regulation. Many regulatory RNAs have been found to be incapable of protein coding and are hence termed non-coding RNAs (ncRNAs). However, studies in recent years have shown that several previously annotated non-coding RNAs have the potential to encode proteins, and conversely, some coding RNAs have regulatory functions independent of the protein they encode. Such bi-functional RNAs, with both protein coding and non-coding functions, which we term ‘cncRNAs’, have emerged as new players in cellular systems. Here, we describe the functions of some cncRNAs identified from bacteria to humans. Because the functions of many RNAs across genomes remain unclear, we propose that RNAs be classified as coding, non-coding or both only after careful analysis of their functions. PMID:26498036

  3. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-07-01

    The present situation of thermal-hydraulics codes and 3D neutronics codes is briefly described and general considerations for coupling these codes are discussed. Two different basic approaches of coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.
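
    The simpler of the two basic coupling approaches can be sketched as an external (operator-split) fixed-point iteration between the two codes: the neutronics solve produces a power, the thermal-hydraulics solve turns it into a temperature, and the temperature feeds back into the cross sections. The numerical models below are hypothetical linear stand-ins, not ATHLET physics:

```python
# Sketch of an external coupling loop between a neutronics solve and a
# thermal-hydraulics solve. Both "solvers" are made-up linear models:
# power rises with reactivity, temperature rises with power, and a
# Doppler-like feedback reduces reactivity with temperature.

RHO0, ALPHA = 0.002, 1.0e-5   # external reactivity, feedback coefficient (illustrative)

def neutronics(temperature: float) -> float:
    rho = RHO0 - ALPHA * (temperature - 300.0)   # feedback via cross sections
    return 1000.0 * (1.0 + 50.0 * rho)           # power in kW (toy model)

def thermal_hydraulics(power: float) -> float:
    return 300.0 + 0.05 * power                  # fuel/coolant temperature (K)

def couple(tol: float = 1e-8, max_iter: int = 100):
    t = 300.0
    for _ in range(max_iter):
        p = neutronics(t)
        t_new = thermal_hydraulics(p)
        if abs(t_new - t) < tol:
            return p, t_new                      # converged operating point
        t = t_new
    raise RuntimeError("coupling iteration did not converge")
```

    Because the feedback here is a mild contraction, the loop settles in a handful of iterations; real coupled calculations face the same convergence question with far stiffer physics.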

  4. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
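
    The annotations in question are logical facts about the code, loop invariants above all. A toy illustration of what a loop invariant asserts, written here as runtime checks rather than the formal first-order annotations the synthesis tool generates:

```python
# A loop invariant is a property that holds at the top of every
# iteration; a verification condition generator turns it, plus the
# loop body, into proof obligations. Here the invariant is expressed
# as a runtime assert for illustration only.

def summed(xs):
    total, i = 0, 0
    while i < len(xs):
        assert total == sum(xs[:i])   # loop invariant: partial sum so far
        total += xs[i]
        i += 1
    assert total == sum(xs)           # postcondition
    return total
```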

  5. Boundary modelling of the stellarator Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Renner, H.; Strumberger, E.; Kisslinger, J.; Nührenberg, J.; Wobig, H.

    1997-02-01

    To justify the design of the divertor plates in W7-X the magnetic fields of finite-β HELIAS equilibria for the so-called high-mirror case have been computed for various average β-values up to < β > = 0.04 with the NEMEC free-boundary equilibrium code [S.P. Hirshman, W.I. van Rij and W.I. Merkel, Comput. Phys. Commun. 43 (1986) 143] in combination with the newly developed MFBE (magnetic field solver for finite-beta equilibria) code. In a second study the unloading of the target plates by radiation was investigated. The B2 code [B.J. Braams, Ph.D. Thesis, Rijksuniversiteit Utrecht (1986)] was applied for the first time to stellarators to provide a self-consistent modelling of the SOL including effects of neutrals and impurities.

  6. Hartmann Testing of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data are measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full-circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieve the alignment errors and low-order circumferential errors accurately.
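
    A hedged sketch of the centroid analysis: with the slit scanned at N uniform azimuthal positions, the radial centroid c(θ) decomposes into a mean term (related to the average radius and cone angle) plus a first harmonic whose cosine and sine amplitudes behave like lateral alignment (decenter) errors. The Fourier averaging below is a generic version of this idea, not the actual OSAC equations.

```python
# Extract the mean and first azimuthal harmonic of centroid-vs-angle data
# sampled at N uniform positions around the circumference.
import math

def first_harmonic(centroids):
    n = len(centroids)
    mean = sum(centroids) / n
    a = 2.0 / n * sum(c * math.cos(2 * math.pi * k / n)
                      for k, c in enumerate(centroids))
    b = 2.0 / n * sum(c * math.sin(2 * math.pi * k / n)
                      for k, c in enumerate(centroids))
    return mean, a, b

# Synthetic scan: mean radius 5.0 with a decenter-like harmonic (0.3, -0.2)
n = 36
data = [5.0 + 0.3 * math.cos(2 * math.pi * k / n)
            - 0.2 * math.sin(2 * math.pi * k / n) for k in range(n)]
mean, a, b = first_harmonic(data)
```

For uniform sampling the harmonic terms are exactly orthogonal, so the decenter-like amplitudes are recovered without fitting machinery.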

  7. Ensemble coding of face identity is not independent of the coding of individual identity.

    PubMed

    Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina

    2018-06-01

    Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation, access to ensemble information in the absence of detailed exemplar information, that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.

  8. Laser x-ray Conversion and Electron Thermal Conductivity

    NASA Astrophysics Data System (ADS)

    Wang, Guang-yu; Chang, Tie-qiang

    2001-02-01

    The influence of electron thermal conductivity on laser x-ray conversion in the coupling of a 3ω0 laser with an Au plane target has been investigated using a non-LTE radiation hydrodynamic code. A non-local electron thermal conductivity is introduced and compared with two kinds of flux-limited Spitzer-Härm descriptions. The results show that the non-local thermal conductivity increases the laser x-ray conversion efficiency and causes important changes in the plasma state and coupling features.
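
    For contrast with the non-local model, a flux-limited Spitzer-Härm conductivity can be sketched as follows: the diffusive heat flux, which scales as T^(5/2) dT/dx, is capped at a fraction f of the free-streaming flux. The harmonic limiter and the normalized units below are illustrative choices, not taken from the paper.

```python
# Flux-limited Spitzer-Harm heat flux in normalized units: the diffusive
# flux is smoothly capped at f times the free-streaming flux ~ n k T v_th.
# Constants and the "harmonic" limiter form are illustrative only.

def spitzer_harm(T, dTdx, kappa0=1.0):
    return -kappa0 * T ** 2.5 * dTdx

def flux_limited(T, dTdx, n_e=1.0, f=0.05):
    q_sh = spitzer_harm(T, dTdx)
    q_fs = f * n_e * T ** 1.5          # ~ f * n * k*T * v_th, normalized
    return q_sh / (1.0 + abs(q_sh) / q_fs)

q_mild  = flux_limited(T=1.0, dTdx=-0.01)   # weak gradient: near Spitzer-Harm
q_steep = flux_limited(T=1.0, dTdx=-10.0)   # steep gradient: capped near q_fs
```

In steep-gradient regions the limiter, like the non-local model, prevents the unphysically large diffusive flux that unmodified Spitzer-Härm conduction would predict.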

  9. Table-top laser-driven ultrashort electron and X-ray source: the CIBER-X source project

    NASA Astrophysics Data System (ADS)

    Girardeau-Montaut, Jean-Pierre; Kiraly, Bélà; Girardeau-Montaut, Claire; Leboutet, Hubert

    2000-09-01

    We report on the development of a new laser-driven table-top ultrashort electron and X-ray source, also called the CIBER-X source . X-ray pulses are produced by a three-step process which consists of the photoelectron emission from a thin metallic photocathode illuminated by 16 ps duration laser pulses at 213 nm. The e-gun is a standard Pierce diode electrode type, in which electrons are accelerated by a cw electric field of ˜11 MV/m up to a hole made in the anode. The photoinjector produces a train of 70-80 keV electron pulses of ˜0.5 nC and 20 A peak current at a repetition rate of 10 Hz. The electrons are then transported outside the diode along a path of 20 cm length, and are focused onto a target of thullium by magnetic fields produced by two electromagnetic coils. X-rays are then produced by the impact of electrons on the target. Simulations of geometrical, electromagnetic fields and energetic characteristics of the complete source were performed previously with the assistance of the code PIXEL1 also developed at the laboratory. Finally, experimental electron and X-ray performances of the CIBER-X source as well as its application to very low dose imagery are presented and discussed. source Compacte d' Impulsions Brèves d' Electrons et de Rayons X

  10. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
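
    The scheme can be illustrated with the textbook Hamming(7,4) parity-check matrix: each 7-bit source block is treated as an error pattern, only its 3-bit syndrome is transmitted, and the decoder returns the lowest-weight (coset-leader) pattern with that syndrome. Decoding is distortionless whenever a block contains at most one 1-bit, which is the sparse-source regime the entropy argument addresses; this toy matrix is standard, not taken from the paper.

```python
# Syndrome-source-coding toy: compress sparse 7-bit blocks to 3-bit
# syndromes using the Hamming(7,4) parity-check matrix H, whose columns
# are the binary representations of 1..7.

H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(block):
    """3-bit syndrome H*x mod 2 of a 7-bit block."""
    return tuple(sum(h * b for h, b in zip(row, block)) % 2 for row in H)

def coset_leader(syn):
    """Lowest-weight 7-bit pattern with this syndrome (weight 0 or 1)."""
    pos = syn[0] * 4 + syn[1] * 2 + syn[2]   # column index; 0 means all-zero
    block = [0] * 7
    if pos:
        block[pos - 1] = 1
    return block

source = [0, 0, 0, 0, 1, 0, 0]       # sparse block: a single 1-bit
compressed = syndrome(source)        # 3 bits transmitted instead of 7
restored = coset_leader(compressed)
```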

  11. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    PubMed

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures, and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that only 54 patients (54%) had an entirely correct primary diagnosis assigned, and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated from the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  12. The Role of Xist in X-Chromosome Dosage Compensation.

    PubMed

    Sahakyan, Anna; Yang, Yihao; Plath, Kathrin

    2018-06-14

    In each somatic cell of a female mammal, one X chromosome is transcriptionally silenced via X-chromosome inactivation (XCI), initiated early in development. Although XCI events are conserved in mouse and human postimplantation development, regulation of X-chromosome dosage in preimplantation development occurs differently. In preimplantation development, mouse embryos undergo an imprinted form of XCI, whereas humans lack imprinted XCI and instead regulate gene expression of both X chromosomes by dampening transcription. The long non-coding RNA Xist/XIST is expressed in mouse and human preimplantation and postimplantation development to orchestrate XCI, but its role in dampening is unclear. In this review, we discuss recent advances in our understanding of the role of Xist in X-chromosome dosage compensation in mouse and human.

  13. Electronic and mechanical properties of ZnX (X = S, Se and Te)—An ab initio study

    NASA Astrophysics Data System (ADS)

    Verma, Ajay Singh; Sharma, Sheetal; Sarkar, Bimal Kumar; Jindal, Vijay Kumar

    2011-12-01

    Zinc chalcogenides (ZnX, X = S, Se and Te) have been attracting increasing attention as wide and direct band gap semiconductors for blue and ultraviolet optical devices. This paper analyzes the electronic and mechanical properties of these materials by an ab initio pseudopotential method that uses norm-conserving pseudopotentials in fully nonlocal form, as implemented in the SIESTA code. In this approach the local density approximation (LDA) is used for the exchange-correlation (XC) potential. The calculations are given for the band gap, elastic constants (C11, C12 and C44), shear modulus, and Young's modulus. The results are in very good agreement with previous theoretical calculations and available experimental data.

  14. Trace-shortened Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Solomon, G.

    1994-01-01

    Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.

  15. Variation of SNOMED CT coding of clinical research concepts among coding experts.

    PubMed

    Andrews, James E; Richesson, Rachel L; Krischer, Jeffrey

    2007-01-01

    To compare consistency of coding among professional SNOMED CT coders representing three commercial providers of coding services when coding clinical research concepts with SNOMED CT. A sample of clinical research questions from case report forms (CRFs) generated by the NIH-funded Rare Disease Clinical Research Network (RDCRN) was sent to three coding companies with instructions to code the core concepts using SNOMED CT. The sample consisted of 319 question/answer pairs from 15 separate studies. The companies were asked to select SNOMED CT concepts (in any form, including post-coordinated) that capture the core concept(s) reflected in the question. Also, they were asked to state their level of certainty, as well as how precise they felt their coding was. Basic frequencies were calculated to determine raw agreement among the companies and other descriptive information. Krippendorff's alpha was used to determine a statistical measure of agreement among the coding companies for several measures (semantic, certainty, and precision). No significant level of agreement among the experts was found. There is little semantic agreement in coding of clinical research data items across coders from three professional coding services, even using a very liberal definition of agreement.

  16. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    NASA Technical Reports Server (NTRS)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce software development cost and time. The use of a formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to the component-based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition effort. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component compositions. We use code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  17. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation combines cooperation gain and channel coding gain well, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
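
    The girth-4 condition being cancelled can be checked directly on a parity-check matrix: a length-4 cycle exists in the Tanner graph exactly when two rows of H share 1-entries in two or more columns. The sketch below applies this test to two tiny hand-made matrices; real QC-LDPC designs apply the equivalent condition to the exponent matrix instead.

```python
# Detect length-4 cycles in an LDPC Tanner graph: two rows of the binary
# parity-check matrix H overlapping in >= 2 columns form a 4-cycle.
from itertools import combinations

def has_girth4(H):
    for r1, r2 in combinations(H, 2):
        overlap = sum(a & b for a, b in zip(r1, r2))
        if overlap >= 2:
            return True
    return False

H_bad = [           # rows 0 and 1 overlap in columns 0 and 2 -> 4-cycle
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
]
H_good = [          # every row pair overlaps in at most one column
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
]
```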

  18. Background simulations of the wide-field coded-mask camera for X-/Gamma-ray of the French-Chinese mission SVOM

    NASA Astrophysics Data System (ADS)

    Godet, Olivier; Barret, Didier; Paul, Jacques; Sizun, Patrick; Mandrou, Pierre; Cordier, Bertrand

    SVOM (Space Variable Object Monitor) is a French-Chinese mission dedicated to the study of high-redshift GRBs, which is expected to be launched in 2012. The anti-Sun pointing strategy of SVOM, along with a strong and integrated ground segment consisting of two wide-field robotic telescopes covering the near-IR and optical, will optimise ground-based GRB follow-ups by the largest telescopes and thus the measurement of spectroscopic redshifts. The central instrument of the science payload will be an innovative wide-field coded-mask camera for X-/Gamma-rays (4-250 keV) responsible for triggering and localising GRBs with an accuracy better than 10 arc-minutes. Such an instrument will be background-dominated, so it is essential to estimate the expected in-orbit background level during the early phase of the instrument design in order to ensure good science performance. We present our Monte-Carlo simulator enabling us to compute the background spectrum taking into account the mass model of the camera and the main components of the space environment encountered in orbit by the satellite. From that computation, we show that the current design of the camera CXG will be more sensitive to high-redshift GRBs than the Swift-BAT thanks to its low-energy threshold of 4 keV.

  19. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  20. Diagnostic Coding for Epilepsy.

    PubMed

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  1. Bragg x-ray survey spectrometer for ITER.

    PubMed

    Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S

    2012-10-01

    Several potential impurity ions in the ITER plasmas will lead to loss of confined energy through line and continuum emission. For real-time monitoring of impurities, a seven-channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents the design and analysis of the spectrometer, including x-ray tracing with the Shadow-XOP code, sensitivity calculations for a reference H-mode plasma, and a neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements of impurity monitoring in 10 ms integration time at the minimum levels for low-Z to high-Z impurity ions can largely be met.

  2. JLIFE: THE JEFFERSON LAB INTERACTIVE FRONT END FOR THE OPTICAL PROPAGATION CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Anne M.; Shinn, Michelle D.

    2013-08-01

    We present details on a graphical interface for the open source software program Optical Propagation Code, or OPC. This interface, written in Java, allows a user with no knowledge of OPC to create an optical system, with lenses, mirrors, apertures, etc., and the appropriate drifts between them. The Java code creates the appropriate Perl script that serves as the input for OPC. The mode profile is then output at each optical element. The display can be either an intensity profile along the x axis, or an isometric 3D plot which can be tilted and rotated. These profiles can be saved. Examples of the input and output will be presented.

  3. Report number codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R.N.

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  4. Speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that

  5. The X-Ray Polarization of the Accretion Disk Coronae of Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Beheshtipour, Banafsheh; Krawczynski, Henric; Malzac, Julien

    2017-11-01

    Hard X-rays observed in Active Galactic Nuclei (AGNs) are thought to originate from the Comptonization of the optical/UV accretion disk photons in a hot corona. Polarization studies of these photons can help to constrain the corona geometry and the plasma properties. We have developed a ray-tracing code that simulates the Comptonization of accretion disk photons in coronae of arbitrary shapes, and use it here to study the polarization of the X-ray emission from wedge and spherical coronae. We study the predicted polarization signatures for the fully relativistic and various approximate treatments of the elemental Compton scattering processes. We furthermore use the code to evaluate the impact of nonthermal electrons and cyclo-synchrotron photons on the polarization properties. Finally, we model the NuSTAR observations of the Seyfert I galaxy Mrk 335 and predict the associated polarization signal. Our studies show that X-ray polarimetry missions such as NASA’s Imaging X-ray Polarimetry Explorer and the X-ray Imaging Polarimetry Explorer proposed to ESA will provide valuable new information about the physical properties of the plasma close to the event horizon of AGN black holes.

  6. Extension of the XGC code for global gyrokinetic simulations in stellarator geometry

    NASA Astrophysics Data System (ADS)

    Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock

    2017-10-01

    In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.

  7. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.

  8. hybridMANTIS: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-01

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulation of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, like on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by the optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result
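
    The load-balancing idea can be sketched with a shared work queue: "showers" are pulled by workers of very different speeds, standing in for the GPU and CPU cores, so the faster worker automatically takes the larger share without any static partitioning. The worker names and per-shower costs below are invented; this is not the actual hybridMANTIS scheduler.

```python
# Dynamic load balancing via a shared queue: each worker pulls a new
# "shower" whenever it finishes one, so work splits itself by speed.
import queue
import threading

def run_workers(n_showers, speeds):
    tasks = queue.Queue()
    for i in range(n_showers):
        tasks.put(i)
    done = {name: [] for name in speeds}

    def worker(name, cost):
        while True:
            try:
                shower = tasks.get_nowait()
            except queue.Empty:
                return
            for _ in range(cost):      # simulate per-shower transport cost
                pass
            done[name].append(shower)

    threads = [threading.Thread(target=worker, args=(n, c))
               for n, c in speeds.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

# "gpu" is fast, "cpu" is slow; costs are arbitrary loop counts.
done = run_workers(200, {"gpu": 10, "cpu": 5000})
```

Every shower is processed exactly once regardless of how the two workers interleave, which is the property a real scheduler must preserve while hiding the slower device's latency.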

  9. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

    This User's Manual contains a complete description of the computer code known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code, including the global structure of the code, a list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code, which generates coordinate systems for arbitrary axisymmetric ducts.

  10. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular repeat-accumulate (IRA) codes.

  11. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts. 3. Development of a code generator for performance prediction. 4. Automated partitioning. 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  12. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC)(3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code, with an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.
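    For context, net coding gain is conventionally computed from the target output BER, the uncoded input BER the code can tolerate, and the code rate R. This standard definition (in the ITU-T G.975.1 sense) is supplied here for the reader's reference; it is not taken from the abstract:

    ```latex
    \mathrm{NCG} = 20\log_{10}\!\left[\sqrt{2}\,\operatorname{erfc}^{-1}\!\left(2\,\mathrm{BER}_{\mathrm{out}}\right)\right]
                 - 20\log_{10}\!\left[\sqrt{2}\,\operatorname{erfc}^{-1}\!\left(2\,\mathrm{BER}_{\mathrm{in}}\right)\right]
                 + 10\log_{10} R \quad [\mathrm{dB}]
    ```

    The 10 log10 R term penalizes the bandwidth expansion of lower-rate codes, which is why the abstract emphasizes the high (94.5%) overall rate of the concatenated scheme.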

  13. The donor star of the X-ray pulsar X1908+075

    NASA Astrophysics Data System (ADS)

    Martínez-Núñez, S.; Sander, A.; Gímenez-García, A.; Gónzalez-Galán, A.; Torrejón, J. M.; Gónzalez-Fernández, C.; Hamann, W.-R.

    2015-06-01

    High-mass X-ray binaries consist of a massive donor star and a compact object. While several of these systems have been well studied in X-rays, little is known about most of the donor stars, as they are often heavily obscured in the optical and ultraviolet regime. There is an opportunity to observe them at infrared wavelengths, however. The goal of this study is to obtain the stellar and wind parameters of the donor star in the X1908+075 high-mass X-ray binary system with a stellar atmosphere model, to check whether previous studies from X-ray observations and spectral morphology lead to a sufficient description of the donor star. We obtained H- and K-band spectra of X1908+075 and analysed them with the Potsdam Wolf-Rayet (PoWR) model atmosphere code. For the first time, we calculated a stellar atmosphere model for the donor star, whose main parameters are: Mspec = 15 ± 6 M⊙, T∗ = 23 (+6/-3) kK, log g_eff = 3.0 ± 0.2 and log L/L⊙ = 4.81 ± 0.25. The obtained parameters point towards an early B-type (B0-B3) star, probably in a supergiant phase. Moreover, we determined a more accurate distance to the system, 4.85 ± 0.50 kpc, than the previously reported value. Based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Appendix A is available in electronic form at http://www.aanda.org

  14. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  15. PheProb: probabilistic phenotyping using diagnosis codes to improve power for genetic association studies.

    PubMed

    Sinnott, Jennifer A; Cai, Fiona; Yu, Sheng; Hejblum, Boris P; Hong, Chuan; Kohane, Isaac S; Liao, Katherine P

    2018-05-17

    Standard approaches for large-scale phenotypic screens using electronic health record (EHR) data apply thresholds, such as ≥2 diagnosis codes, to define subjects as having a phenotype. However, the variation in the accuracy of diagnosis codes can impair the power of such screens. Our objective was to develop and evaluate an approach which converts diagnosis codes into a probability of a phenotype (PheProb). We hypothesized that this alternate approach for defining phenotypes would improve power for genetic association studies. The PheProb approach employs unsupervised clustering to separate patients into 2 groups based on diagnosis codes. Subjects are assigned a probability of having the phenotype based on the number of diagnosis codes. This approach was developed using simulated EHR data and tested in a real-world EHR cohort. In the latter, we tested the association between low density lipoprotein cholesterol (LDL-C) genetic risk alleles known for association with hyperlipidemia and hyperlipidemia codes (ICD-9 272.x). PheProb and thresholding approaches were compared. Among n = 1462 subjects in the real-world EHR cohort, the threshold-based p-values for association between the genetic risk score (GRS) and hyperlipidemia were 0.126 (≥1 code), 0.123 (≥2 codes), and 0.142 (≥3 codes). The PheProb approach produced the expected significant association between the GRS and hyperlipidemia: p = .001. PheProb improves statistical power for association studies relative to standard thresholding approaches by leveraging information about the phenotype in the billing code counts. The PheProb approach has direct applications where efficient approaches are required, such as in Phenome-Wide Association Studies.
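    The clustering step (two groups separated on billing-code counts, each subject assigned a membership probability) can be illustrated with a two-component Poisson mixture fitted by EM. This is a plausible stand-in for the idea, not the authors' published model; the counts and parameter choices below are hypothetical.

    ```python
    import math

    def poisson_pmf(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    def fit_two_poisson(counts, iters=200):
        """EM for a two-component Poisson mixture over billing-code counts.
        Returns (pi, lam0, lam1, post); post[i] is the probability that
        subject i belongs to the high-count (phenotype) component."""
        lam0, lam1, pi = 1.0, float(max(counts)), 0.5   # crude initialisation
        post = [0.5] * len(counts)
        for _ in range(iters):
            # E-step: posterior probability of the high-count component.
            post = []
            for k in counts:
                p1 = pi * poisson_pmf(k, lam1)
                p0 = (1 - pi) * poisson_pmf(k, lam0)
                post.append(p1 / (p0 + p1))
            # M-step: update mixing weight and the two Poisson rates.
            w1 = sum(post)
            w0 = len(counts) - w1
            pi = w1 / len(counts)
            lam1 = sum(p * k for p, k in zip(post, counts)) / w1
            lam0 = sum((1 - p) * k for p, k in zip(post, counts)) / w0
        return pi, lam0, lam1, post

    counts = [0, 1, 0, 2, 1, 0, 8, 9, 7, 10]   # hypothetical per-subject code counts
    pi, lam0, lam1, post = fit_two_poisson(counts)
    ```

    Downstream, the soft probabilities `post` would enter the association test in place of a hard ≥k-codes indicator, which is the source of the power gain the abstract reports.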

  16. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
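    The bookkeeping implied by the pyramid structure above (non-overlapping seven-pixel neighborhoods yielding six oriented coefficients plus one blob passed to the next level) can be sketched. The function below only counts coefficients per layer; it does not implement the HOP kernels themselves, and its name is illustrative.

    ```python
    def hop_layer_counts(n_pixels, levels):
        """Coefficient bookkeeping for a HOP-style pyramid: each level maps
        non-overlapping 7-pixel neighborhoods to 6 oriented high-pass
        coefficients (kept) and 1 low-pass blob (fed to the next level)."""
        counts = []
        n = n_pixels
        for _ in range(levels):
            neighborhoods = n // 7
            counts.append(6 * neighborhoods)  # oriented coefficients this layer
            n = neighborhoods                 # blobs become the next level's input
        counts.append(n)                      # residual low-pass (blob) layer
        return counts

    print(hop_layer_counts(7 ** 3, 3))  # [294, 42, 6, 1]
    ```

    The counts sum to the original pixel count (294 + 42 + 6 + 1 = 343), reflecting that the transform is critically sampled, with ever fewer coefficients devoted to larger features.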

  17. X-band RF gun and linac for medical Compton scattering X-ray source

    NASA Astrophysics Data System (ADS)

    Dobashi, Katsuhito; Uesaka, Mitsuru; Fukasawa, Atsushi; Sakamoto, Fumito; Ebina, Futaro; Ogino, Haruyuki; Urakawa, Junji; Higo, Toshiyasu; Akemoto, Mitsuo; Hayano, Hitoshi; Nakagawa, Keiichi

    2004-12-01

    A Compton scattering hard X-ray source for 10-80 keV is under construction using the X-band (11.424 GHz) electron linear accelerator and a YAG laser at the Nuclear Engineering Research Laboratory, University of Tokyo. This work is a part of the national project on the development of advanced compact medical accelerators in Japan. The National Institute of Radiological Sciences is the host institute, and U. Tokyo and KEK are working on the X-ray source. The main advantage is the production of tunable monochromatic hard (10-80 keV) X-rays with intensities of 10^8-10^10 photons/s (at several stages) in a table-top size. A second important aspect is the reduction of noise radiation at the beam dump by decelerating the electrons after the Compton scattering. This realizes one beamline of a third-generation SR source at small facilities without heavy shielding. The final goal is to install the linac and laser on a moving gantry. We have designed an X-band (11.424 GHz) traveling-wave-type linac for this purpose. Numerical simulation with the CAIN code and luminosity calculations were performed to estimate the X-ray yield. An X-band thermionic-cathode RF gun and an RDS (Round Detuned Structure)-type X-band accelerating structure are used to generate a 50 MeV electron beam with 20 pC microbunches (10^4) in a 1 microsecond RF macro-pulse. The X-ray yield from this electron beam and a Q-switched Nd:YAG laser of 2 J/10 ns is 10^7 photons/RF-pulse (10^8 photons/s at 10 pps). We plan to adopt a laser-circulation technique to increase the X-ray yield up to 10^9 photons/pulse (10^10 photons/s). A 50 MW X-band klystron and a compact modulator have been constructed and are now under tuning. The construction of the whole system has started. X-ray generation and medical application will be performed early next year.

  18. Poster - 28: Shielding of X-ray Rooms in Ontario in the Absence of Best Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frimeth, Jeff; Richer, Jeff; Nesbitt, James

    This poster will be strictly based on the Healing Arts Radiation Protection (HARP) Act, Regulation 543 under this Act (X-ray Safety Code), and personal communication the presenting author has had. In Ontario, the process of approval of an X-ray machine installation by the Director of the X-ray Inspection Service (XRIS) follows a certain protocol. Initially, the applicant submits a series of forms, including recommended shielding amounts, in order to satisfy the law. This documentation is then transferred to a third-party vendor (i.e. a professional engineer – P.Eng.) outsourced by the Ministry of Health and Long-term Care (MOHLTC). The P.Eng. then evaluates the submitted documentation for appropriate fulfillment of the HARP Act and Reg. 543 requirements. If the P.Eng.’s evaluation of the documentation is to their satisfaction, the XRIS is then notified. Finally, the Director will then issue a letter of approval to install the equipment at the facility. The methodology required to be used by the P.Eng. in order to determine the required amounts of protective barriers, and recommended to be used by the applicant, is contained within Safety Code 20A. However, Safety Code 35 has replaced the obsolete Safety Code 20A document and employs best practices in shielding design. This talk will focus further on specific intentions and limitations of Safety Code 20A. Furthermore, this talk will discuss the definition of the “practice of professional engineering” in Ontario. COMP members who are involved in shielding design are strongly encouraged to attend.

  19. A Review on Spectral Amplitude Coding Optical Code Division Multiple Access

    NASA Astrophysics Data System (ADS)

    Kaur, Navpreet; Goyal, Rakesh; Rani, Monika

    2017-06-01

    This manuscript deals with the analysis of a Spectral Amplitude Coding Optical Code Division Multiple Access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. The number of users and the type of codes used in the optical system directly decide the performance of the system. MAI can be restricted by efficiently designing optical codes and implementing them with a unique architecture to accommodate more users. Hence, it is necessary to design a technique like the spectral direct detection (SDD) technique with a modified double weight code, which can provide better cardinality and a good correlation property.
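    The correlation property referred to above can be illustrated numerically. The weight-2 code set below is hypothetical (the actual modified double weight construction is more elaborate); the point is only that a bounded in-phase cross-correlation between users' spectral codes is what keeps MAI in check.

    ```python
    def cross_correlation(a, b):
        # In-phase cross-correlation of two binary (0/1) spectral codes.
        return sum(x & y for x, y in zip(a, b))

    # Hypothetical weight-2 codes for three users.
    codes = [
        [1, 1, 0, 0, 0, 0],
        [0, 1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 0],
    ]
    max_xc = max(cross_correlation(a, b)
                 for i, a in enumerate(codes) for b in codes[i + 1:])
    print(max_xc)  # 1: any two users share at most one spectral chip
    ```

    In a SAC-OCDMA detector, keeping this maximum cross-correlation at or below 1 bounds the interference any single user contributes, which is why code design and cardinality trade off against each other.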

  20. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    Part of the overall design effort for x-ray FEL's such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g. electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e. polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e. the extremely low gain regime).

  1. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbolike codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and low maximum variable node degree.

  2. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  3. Characterization of laser-cut copper foil X-pinches

    NASA Astrophysics Data System (ADS)

    Collins, G. W.; Valenzuela, J. C.; Hansen, S. B.; Wei, M. S.; Reed, C. T.; Forsman, A. C.; Beg, F. N.

    2016-10-01

    Quantitative data analyses of laser-cut Cu foil X-pinch experiments on the 150 ns quarter-period, ˜250 kA GenASIS driver are presented. Three different foil designs are tested to determine the effects of initial structure on pinch outcome. Foil X-pinch data are also presented alongside the results from wire X-pinches with comparable mass. The X-ray flux and temporal profile of the emission from foil X-pinches differed significantly from that of wire X-pinches, with all emission from the foil X-pinches confined to a ˜3 ns period as opposed to the delayed, long-lasting electron beam emission common in wire X-pinches. Spectroscopic data show K-shell as well as significant L-shell emission from both foil and wire X-pinches. Fits to synthetic spectra using the SCRAM code suggest that pinching foil X's produced a ˜1 keV, ne ≥ 10^23 cm^-3 plasma. The spectral data combined with the improved reliability of the source timing, flux, and location indicate that foil X-pinches generate a reproducible, K-shell point-projection radiography source that can be easily modified and tailored to suit backlighting needs across a variety of applications.

  4. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  5. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  6. X-ray simulation algorithms used in ISP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, John P.

    ISP is a simulation code which is sometimes used in the USNDS program. ISP is maintained by Sandia National Lab. However, the X-ray simulation algorithm used by ISP was written by scientists at LANL – mainly by Ed Fenimore, with some contributions from John Sullivan and George Neuschaefer and probably others. In email to John Sullivan on July 25, 2016, Jill Rivera, ISP project lead, said “ISP uses the function xdosemeters_sim from the xgen library.” This is a Fortran subroutine which is also used to simulate the X-ray response in consim (a descendant of xgen). Therefore, no separate documentation of the X-ray simulation algorithms in ISP has been written – the documentation for the consim simulation can be used.

  7. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode
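    The encoding rule described above (draw a degree from the Robust Soliton distribution, then force low-degree symbols to include at least one high-priority information symbol) can be sketched as follows. The degree threshold, helper names, and parameter defaults are illustrative assumptions, not taken from the work.

    ```python
    import math
    import random

    def robust_soliton(k, c=0.1, delta=0.5):
        """Robust Soliton degree distribution over degrees 0..k (weight 0 at 0)."""
        s = c * math.sqrt(k) * math.log(k / delta)
        pivot = int(round(k / s))
        rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [0.0] * (k + 1)
        for d in range(1, min(pivot, k + 1)):
            tau[d] = s / (k * d)
        if 0 < pivot <= k:
            tau[pivot] = s * math.log(s / delta) / k
        z = sum(rho) + sum(tau)
        return [(rho[d] + tau[d]) / z for d in range(k + 1)]

    def plt_encode_symbol(block, n_high, dist, rng, threshold=2):
        """One Prioritized-LT code symbol: draw a degree from the Robust Soliton
        distribution; symbols of small degree take at least one neighbour from
        the high-priority prefix block[:n_high] (the priority restriction)."""
        k = len(block)
        degree = rng.choices(range(k + 1), weights=dist)[0]
        chosen = {rng.randrange(n_high)} if degree <= threshold else set()
        while len(chosen) < degree:
            chosen.add(rng.randrange(k))
        symbol = 0
        for i in chosen:
            symbol ^= block[i]          # XOR of the selected information symbols
        return symbol, chosen

    rng = random.Random(1)
    block = [1, 0, 1, 1, 0, 0, 1, 0]    # message block; first 3 symbols high-priority
    dist = robust_soliton(len(block))
    symbol, neighbours = plt_encode_symbol(block, n_high=3, dist=dist, rng=rng)
    ```

    Because the decoder is unchanged, only this encoder-side bias toward the high-priority prefix distinguishes the scheme from plain LT coding.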

  8. Composing Data Parallel Code for a SPARQL Graph Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces the memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.

  9. Bar Code Labels

    NASA Technical Reports Server (NTRS)

    1988-01-01

    American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed anodized aluminum process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700 degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.

  10. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  11. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  12. MT71x: Multi-Temperature Library Based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard

    The Nuclear Data Team has released a multi-temperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 into 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries and between deterministic and Monte Carlo codes straightforward. Both the multigroup energy and continuous energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). An expanded suite of tests was then used for additional verification and, finally, the library was validated using an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.

  13. Subspace-Aware Index Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near-optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
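    The classical (subspace-unaware) index coding problem being generalized here can be shown in its smallest instance: two receivers, each wanting the packet the other already caches, served by a single coded broadcast instead of two uncoded transmissions. The packet values are arbitrary examples.

    ```python
    # Classical index coding instance:
    # receiver R1 wants packet x1 and caches x2; R2 wants x2 and caches x1.
    x1, x2 = 0b1011, 0b0110

    broadcast = x1 ^ x2           # one coded transmission instead of two

    r1_decoded = broadcast ^ x2   # R1 XORs out its cached packet
    r2_decoded = broadcast ^ x1   # R2 likewise
    print(r1_decoded == x1, r2_decoded == x2)  # True True
    ```

    The subspace-aware variant in this paper goes further: when the packets themselves lie near a low-dimensional subspace, fewer coded transmissions can suffice than this side-information argument alone predicts.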

  14. Subspace-Aware Index Codes

    DOE PAGES

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    2017-04-12

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near-optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.

  15. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a free distance d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
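    The coset-partition idea underlying the construction can be sketched on a toy code. The [4,2] generator matrix below is illustrative only; the point is the mechanics of splitting a linear block code into cosets of a subcode, which a finite-state encoder can then assign to its states.

    ```python
    def xor_words(a, b):
        return tuple(x ^ y for x, y in zip(a, b))

    def span(rows, n):
        """All GF(2) linear combinations of the given generator rows."""
        words = {tuple([0] * n)}
        for r in rows:
            words |= {xor_words(w, tuple(r)) for w in words}
        return words

    # Toy [4,2] linear block code (generator matrix is illustrative).
    G = [(1, 0, 1, 1), (0, 1, 0, 1)]
    code = span(G, 4)                    # 4 codewords
    subcode = span(G[:1], 4)             # 2-codeword subcode from the first row

    # Partition the code into cosets of the subcode; each coset can serve
    # as the output set associated with one encoder state.
    cosets, remaining = [], set(code)
    while remaining:
        w = min(remaining)
        coset = {xor_words(w, s) for s in subcode}
        cosets.append(sorted(coset))
        remaining -= coset
    print(len(cosets))  # 2
    ```

    In the paper's construction, the distances between and within such cosets are what yield FS codes whose d_free meets the block-code upper bounds, without computer search.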

  16. Estimations of Mo X-pinch plasma parameters on QiangGuang-1 facility by L-shell spectral analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jian; Qiu, Aici; State Key Laboratory of Intense Pulsed Radiation Simulation and Effect, Northwest Institute of Nuclear Technology, Xi'an 710024

    2013-08-15

    Plasma parameters of molybdenum (Mo) X-pinches on the 1-MA QiangGuang-1 facility were estimated by L-shell spectral analysis. X-ray radiation from the X-pinches had a pulse width of 1 ns, and its spectra in the 2-3 keV range were measured with a time-integrated X-ray spectrometer. Relative intensities of spectral features were derived by correcting for the spectral sensitivity of the spectrometer. With the open-source atomic code FAC (Flexible Atomic Code), ion structures and various atomic radiative-collisional rates for the O-, F-, Ne-, Na-, Mg-, and Al-like ionization stages were calculated, and synthetic spectra were constructed at given plasma parameters. By fitting the measured spectra with the modeled ones, Mo X-pinch plasmas on the QiangGuang-1 facility were found to have an electron density of about 10^21 cm^-3 and an electron temperature of about 1.2 keV.

  17. Identification of coding and non-coding mutational hotspots in cancer genomes.

    PubMed

    Piraino, Scott W; Furney, Simon J

    2017-01-05

    The identification of mutations that play a causal role in tumour development, so-called "driver" mutations, is of critical importance for understanding how cancers form and how they might be treated. Several large cancer sequencing projects have identified genes that are recurrently mutated in cancer patients, suggesting a role in tumourigenesis. While the landscape of coding drivers has been extensively studied and many of the most prominent driver genes are well characterised, comparatively less is known about the role of mutations in the non-coding regions of the genome in cancer development. The continuing fall in genome sequencing costs has resulted in a concomitant increase in the number of cancer whole genome sequences being produced, facilitating systematic interrogation of both the coding and non-coding regions of cancer genomes. To examine the mutational landscapes of tumour genomes we have developed a novel method to identify mutational hotspots in tumour genomes using both mutational data and information on evolutionary conservation. We have applied our methodology to over 1300 whole cancer genomes and show that it identifies prominent coding and non-coding regions that are known or highly suspected to play a role in cancer. Importantly, we applied our method to the entire genome, rather than relying on predefined annotations (e.g. promoter regions), and we highlight recurrently mutated regions that may have resulted from increased exposure to mutational processes rather than selection, some of which have been identified previously as targets of selection. Finally, we implicate several pan-cancer and cancer-specific candidate non-coding regions, which could be involved in tumourigenesis. We have developed a framework to identify mutational hotspots in cancer genomes, which is applicable to the entire genome. This framework identifies known and novel coding and non-coding mutational hotspots and can be used to differentiate candidate driver regions from

  18. Hydrodynamic evolution of plasma waveguides for soft-x-ray amplifiers

    NASA Astrophysics Data System (ADS)

    Oliva, Eduardo; Depresseux, Adrien; Cotelo, Manuel; Lifschitz, Agustín; Tissandier, Fabien; Gautier, Julien; Maynard, Gilles; Velarde, Pedro; Sebban, Stéphane

    2018-02-01

    High-density, collisionally pumped plasma-based soft-x-ray lasers have recently delivered pulses of hundreds of femtoseconds, breaking the longstanding barrier of one picosecond. To pump these amplifiers, an intense infrared pulse must propagate focused along the full length of the amplifier, which spans several Rayleigh lengths. However, strong nonlinear effects hinder the propagation of the laser beam. The use of a plasma waveguide allows us to overcome these drawbacks, provided the hydrodynamic processes that dominate the creation and subsequent evolution of the waveguide are controlled and optimized. In this paper we present experimental measurements of the radial density profile and transmittance of such waveguides, and we compare them with numerical calculations using hydrodynamic and particle-in-cell codes. Controlling the properties (electron density value and radial gradient) of the waveguide with the help of numerical codes promises the delivery of ultrashort (tens of femtoseconds), coherent soft-x-ray pulses.

  19. Fragile X spectrum disorders.

    PubMed

    Lozano, Reymundo; Rosero, Carolina Alba; Hagerman, Randi J

    2014-11-01

    The fragile X mental retardation 1 gene (FMR1), which codes for the fragile X mental retardation 1 protein (FMRP), is located at Xq27.3. The normal allele of the FMR1 gene typically has 5 to 40 CGG repeats in the 5' untranslated region; abnormal alleles of dynamic mutations include the full mutation (> 200 CGG repeats), premutation (55-200 CGG repeats), and the gray zone mutation (45-54 CGG repeats). Premutation carriers are common in the general population, with approximately 1 in 130-250 females and 1 in 250-810 males, whereas the full mutation and Fragile X syndrome (FXS) occur in approximately 1 in 4000 to 1 in 7000. FMR1 mutations account for a variety of phenotypes including the most common monogenetic cause of inherited intellectual disability (ID) and autism (FXS); the most common genetic form of ovarian failure, the fragile X-associated primary ovarian insufficiency (FXPOI, premutation); and fragile X-associated tremor/ataxia syndrome (FXTAS, premutation). The premutation can also cause developmental problems including ASD and ADHD, especially in boys, and psychopathology including anxiety and depression in children and adults. Some premutation carriers can have a deficit of FMRP, and some unmethylated full mutation individuals can have elevated FMR1 mRNA that is considered a premutation problem. Therefore, the term "Fragile X Spectrum Disorder" (FXSD) should be used to include the wide range of overlapping phenotypes observed in affected individuals with FMR1 mutations. In this review we focus on the phenotypes and genotypes of children with FXSD.
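
    The repeat-count ranges quoted above map directly onto allele categories; a minimal sketch (the 41-44 repeat gap is not covered by the quoted ranges, so labeling it "unclassified" is this sketch's assumption, not the review's):

```python
def classify_fmr1_allele(cgg_repeats: int) -> str:
    """Allele category from a CGG repeat count, per the ranges quoted
    in the abstract above (41-44 handling is an assumption here)."""
    if 5 <= cgg_repeats <= 40:
        return "normal"
    if 45 <= cgg_repeats <= 54:
        return "gray zone"
    if 55 <= cgg_repeats <= 200:
        return "premutation"
    if cgg_repeats > 200:
        return "full mutation"
    return "unclassified"

for n in (30, 50, 90, 250, 42):
    print(n, classify_fmr1_allele(n))
```

    Real diagnostic interpretation also weighs methylation status and FMRP levels, as the abstract notes; a repeat count alone does not determine phenotype.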

  20. Fragile X spectrum disorders

    PubMed Central

    Lozano, Reymundo; Rosero, Carolina Alba; Hagerman, Randi J

    2014-01-01

    Summary The fragile X mental retardation 1 gene (FMR1), which codes for the fragile X mental retardation 1 protein (FMRP), is located at Xq27.3. The normal allele of the FMR1 gene typically has 5 to 40 CGG repeats in the 5′ untranslated region; abnormal alleles of dynamic mutations include the full mutation (> 200 CGG repeats), premutation (55–200 CGG repeats), and the gray zone mutation (45–54 CGG repeats). Premutation carriers are common in the general population, with approximately 1 in 130–250 females and 1 in 250–810 males, whereas the full mutation and Fragile X syndrome (FXS) occur in approximately 1 in 4000 to 1 in 7000. FMR1 mutations account for a variety of phenotypes including the most common monogenetic cause of inherited intellectual disability (ID) and autism (FXS); the most common genetic form of ovarian failure, the fragile X-associated primary ovarian insufficiency (FXPOI, premutation); and fragile X-associated tremor/ataxia syndrome (FXTAS, premutation). The premutation can also cause developmental problems including ASD and ADHD, especially in boys, and psychopathology including anxiety and depression in children and adults. Some premutation carriers can have a deficit of FMRP, and some unmethylated full mutation individuals can have elevated FMR1 mRNA that is considered a premutation problem. Therefore, the term “Fragile X Spectrum Disorder” (FXSD) should be used to include the wide range of overlapping phenotypes observed in affected individuals with FMR1 mutations. In this review we focus on the phenotypes and genotypes of children with FXSD. PMID:25606363

  1. Atomic Processes in X-ray Photoionized Gas

    NASA Technical Reports Server (NTRS)

    Kallman, Timothy

    2005-01-01

    It has long been known that photoionization and photoabsorption play a dominant role in determining the state of gas in nebulae surrounding hot stars and in active galaxies. Recent observations of X-ray spectra demonstrate that these processes are also dominant in highly ionized gas near compact objects, and also affect the transmission of X-rays from the majority of astronomical sources. This has led to new insights into the physical processes at work in these sources. It has also pointed out the need for accurate atomic cross sections for photoionization and absorption, notably for processes involving inner shells. The xstar code can be used for calculating the heating, ionization and reprocessing of X-rays by gas in a range of ionization states and temperatures. It has recently been updated to include an improved treatment of inner shell transitions in iron. I will review the capabilities of xstar and its atomic data, and illustrate some applications to recent X-ray spectral observations.

  2. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    NASA Technical Reports Server (NTRS)

    Taft, James R.

    2000-01-01

    The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector-based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full

  3. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  4. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement, under idealizing assumptions, relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
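
    For readers unfamiliar with the code class, a toy convolutional encoder can be sketched as follows (rate 1/2, constraint length 3, with the textbook generator polynomials 7 and 5 in octal); the Galileo experimental code is the same construction scaled up to constraint length 15 and rate 1/4, which this illustration does not attempt:

```python
# Toy convolutional encoder: each input bit shifts into a small register,
# and one output bit (a parity over register taps) is emitted per generator.
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)    # shift register
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # tap parity
    return out

print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```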

  5. X-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    2000-01-01

    Dr. S. N. Zhang has led a seven-member group (Dr. Yuxin Feng, Mr. Xuejun Sun, Mr. Yongzhong Chen, Mr. Jun Lin, Mr. Yangsen Yao, and Ms. Xiaoling Zhang). This group has carried out the following activities: continued data analysis from the space astrophysical missions CGRO, RXTE, ASCA and Chandra. Significant scientific results have been produced as a result of their work. They discovered the three-layered accretion disk structure around black holes in X-ray binaries; their paper on this discovery is to appear in the prestigious Science magazine. They have also developed a new method for energy spectral analysis of black hole X-ray binaries; four papers on this topic were presented at the most recent Atlanta AAS meeting. They have also carried out Monte-Carlo simulations of X-ray detectors, in support of the hardware development efforts at Marshall Space Flight Center (MSFC). These computation-intensive simulations have been carried out entirely on the computers at UAH. They have also carried out extensive simulations for astrophysical applications, taking advantage of the Monte-Carlo simulation codes developed previously at MSFC and further improved at UAH for detector simulations. One refereed paper and one contribution to conference proceedings have resulted from this effort.

  6. Theory of epigenetic coding.

    PubMed

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  7. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data, using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network, is described. Results are reported for data rates from 80 bps to 115.2 kbps and for both S- and X-band receivers. Results for both one- and two-way radio losses are included.

  8. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth-efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
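
    The layering of a concatenated scheme can be sketched with deliberately tiny toy codes (a 3x-repetition inner code and an even-parity outer code). This shows only the inner/outer structure; the paper's actual scheme pairs a Reed-Solomon outer code with a bandwidth-efficient inner modulation code:

```python
# Inner code: 3x repetition, corrects any single bit flip per group.
def inner_encode(bits):
    return [b for b in bits for _ in range(3)]

def inner_decode(bits):
    # Majority vote over each 3-bit group.
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

# Outer code: one appended parity bit, detects residual errors.
def outer_encode(bits):
    return bits + [sum(bits) % 2]

def outer_check(bits):
    return sum(bits) % 2 == 0

msg = [1, 0, 1, 1]
tx = inner_encode(outer_encode(msg))       # outer, then inner encoding
tx[4] ^= 1                                 # single channel bit flip
rx = inner_decode(tx)
print(rx[:-1] == msg, outer_check(rx))     # True True
```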

  9. Modelling of an Orthovoltage X-ray Therapy Unit with the EGSnrc Monte Carlo Package

    NASA Astrophysics Data System (ADS)

    Knöös, Tommy; Rosenschöld, Per Munck Af; Wieslander, Elinore

    2007-06-01

    Simulations with the EGSnrc code package of an orthovoltage x-ray machine have been performed. The BEAMnrc code was used to transport electrons, produce x-ray photons in the target, and transport these through the treatment machine down to the exit level of the applicator. Further transport in water or CT-based phantoms was facilitated by the DOSXYZnrc code. Phase space files were scored with BEAMnrc and analysed regarding the energy spectra at the end of the applicator. Tuning of simulation parameters was based on the half-value layer quantity for the beams in either Al or Cu. Calculated depth dose and profile curves have been compared against measurements and show good agreement except at shallow depths. The MC model tested in this study can be used for various dosimetric studies as well as for generating a library of typical treatment cases that can serve as both educational material and guidance in clinical practice.

  10. Simulation of the Simbol-X telescope: imaging performance of a deformable x-ray telescope

    NASA Astrophysics Data System (ADS)

    Chauvin, Maxime; Roques, Jean-Pierre

    2009-08-01

    We have developed a simulation tool for a Wolter I telescope subject to deformations. The aim is to understand and predict the behavior of Simbol-X and other future missions (NuSTAR, Astro-H, IXO, ...). Our code, based on Monte-Carlo ray tracing, computes the full photon trajectories up to the detector plane, including the deformations. The degradation of the imaging system is corrected using metrology. This tool allows many analyses to be performed in order to optimize the configuration of any of these telescopes.

  11. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.

  12. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  13. Coding and decoding for code division multiple user communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  14. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps, and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter.
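
    The particle-to-grid coupling at the heart of any PIC code of this kind can be sketched in a few lines; this is a generic minimal 1-D cloud-in-cell (CIC) charge deposition, not Warp's implementation, and all sizes are illustrative:

```python
import numpy as np

# Each particle's charge is shared linearly between its two nearest
# grid nodes (cloud-in-cell weighting) on a periodic 1-D grid.
def deposit(positions, weights, n_cells, dx):
    rho = np.zeros(n_cells)
    cell = np.floor(positions / dx).astype(int) % n_cells
    frac = positions / dx - np.floor(positions / dx)
    np.add.at(rho, cell, weights * (1 - frac))            # left-node share
    np.add.at(rho, (cell + 1) % n_cells, weights * frac)  # right-node share
    return rho / dx                                       # charge density

x = np.array([0.25, 1.6, 3.9])
rho = deposit(x, np.ones(3), n_cells=8, dx=1.0)
print(np.isclose(rho.sum() * 1.0, 3.0))   # total charge is conserved
```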

  15. Coherent diffractive imaging using randomly coded masks

    DOE PAGES

    Seaberg, Matthew H.; d'Aspremont, Alexandre; Turner, Joshua J.

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.
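
    The measurement model underlying this approach is easy to write down; the following sketch (object size, mask count, and random seed are all illustrative, not from the experiment) shows what the detector actually records for each coded mask:

```python
import numpy as np

# Coded diffraction: the detector records only Fourier-intensity images
# |F(mask * object)|^2, one per random binary mask. Phase is lost.
rng = np.random.default_rng(1)
obj = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
masks = rng.integers(0, 2, size=(4, 16, 16))        # 4 random 0/1 masks

measurements = [np.abs(np.fft.fft2(m * obj)) ** 2 for m in masks]

# The diversity across masks is what makes the solution unique (up to
# trivial ambiguities) and lets a convex solver like PhaseCut recover obj.
print(len(measurements), measurements[0].shape)     # 4 (16, 16)
```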

  16. Coherent diffractive imaging using randomly coded masks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seaberg, Matthew H., E-mail: seaberg@slac.stanford.edu; Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025; D'Aspremont, Alexandre

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.

  17. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
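
    The decoder interaction described above reduces to a simple control loop: iterate the inner decoder, attempt the outer decode, and stop early on success. The sketch below uses hypothetical stand-in functions (the mock "decoders" here are not the paper's), only to show the stopping logic:

```python
# Inner decoder iterates only until the outer decoder declares success,
# which is what saves decoding delay in the concatenated system.
def iterative_decode(llrs, inner_iteration, outer_decode, max_iters=8):
    for it in range(1, max_iters + 1):
        llrs = inner_iteration(llrs)       # one turbo decoding iteration
        ok, data = outer_decode(llrs)      # e.g. a Reed-Solomon attempt
        if ok:                             # early stop
            return data, it
    return None, max_iters

# Mock components: each iteration doubles every reliability; the outer
# "decoder" succeeds once all |LLR| values clear a threshold.
inner = lambda llrs: [2 * x for x in llrs]
outer = lambda llrs: (min(abs(x) for x in llrs) > 4, [x > 0 for x in llrs])

data, iters = iterative_decode([0.9, -1.1, 1.3], inner, outer)
print(data, iters)                         # [True, False, True] 3
```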

  18. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  19. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Here, preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  20. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code; however, this could require extensive investments, such as rewriting in modern languages and new data constructs, which would necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts on the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show that up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, were achieved for the demonstration cases and target HPC systems employed.

  1. Lattice dynamics of Ru2FeX (X = Si, Ge) Full Heusler alloys

    NASA Astrophysics Data System (ADS)

    Rizwan, M.; Afaq, A.; Aneeza, A.

    2018-05-01

    In the present work, the lattice dynamics of Ru2FeX (X = Si, Ge) full Heusler alloys are investigated using density functional theory (DFT) within the generalized gradient approximation (GGA) in a plane-wave basis with norm-conserving pseudopotentials. Phonon dispersion curves and phonon densities of states are obtained using the first-principles linear-response approach of density functional perturbation theory (DFPT) as implemented in the Quantum ESPRESSO code. The phonon dispersion curves indicate that, for both Heusler alloys, there is no imaginary phonon mode in the whole Brillouin zone, confirming the dynamical stability of these alloys in the L21-type structure. There is considerable overlap between the acoustic and optical phonon modes, predicting that no phonon band gap exists in the dispersion curves of the alloys. The same result is shown by the phonon density of states curves for both Heusler alloys. The reststrahlen band for Ru2FeSi is found to be smaller than that for Ru2FeGe.

  2. Status of the Simbol-X Background Simulation Activities

    NASA Astrophysics Data System (ADS)

    Tenzer, C.; Briel, U.; Bulgarelli, A.; Chipaux, R.; Claret, A.; Cusumano, G.; Dell'Orto, E.; Fioretti, V.; Foschini, L.; Hauf, S.; Kendziorra, E.; Kuster, M.; Laurent, P.; Tiengo, A.

    2009-05-01

    The Simbol-X background simulation group is working towards a simulation-based background and mass model which can be used before and during the mission. Using the Geant4 toolkit, a Monte Carlo code to simulate the detector background of the Simbol-X focal plane instrument has been developed with the aim of optimizing the design of the instrument. Achieving an overall low instrument background has a direct impact on the sensitivity of Simbol-X and thus will be crucial for the success of the mission. We present results of recent simulation studies concerning the shielding of the detectors with respect to the diffuse cosmic hard X-ray background and to the cosmic-ray proton induced background. Besides estimates of the level and spectral shape of the remaining background expected in the low- and high-energy detectors, anti-coincidence rates and the resulting detector dead time predictions are also discussed.

  3. X-Ray Spectra from MHD Simulations of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy D.; Noble, Scott C.; Krolik, Julian H.

    2011-01-01

    We present new global calculations of X-ray spectra from fully relativistic magnetohydrodynamic (MHD) simulations of black hole (BH) accretion disks. With a self-consistent radiative transfer code including Compton scattering and returning radiation, we can reproduce the predominant spectral features seen in decades of X-ray observations of stellar-mass BHs: a broad thermal peak around 1 keV, a power-law continuum up to >100 keV, and a relativistically broadened iron fluorescent line. By varying the mass accretion rate, different spectral states naturally emerge: thermal-dominant, steep power-law, and low/hard. In addition to the spectral features, we briefly discuss applications to X-ray timing and polarization.

  4. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  5. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with a very low error floor even at moderate block sizes.
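    The serial accumulator chain described above is easy to sketch. The following is an illustrative toy encoder, not the paper's actual protograph construction (it omits the precoder puncturing and the interleavers between stages); `accumulate`, `repeat`, and `araa_encode` are hypothetical names. An accumulator over GF(2) is simply a running XOR:

```python
def accumulate(bits):
    """Rate-1 accumulator over GF(2): y[i] = x[i] XOR y[i-1], with y[-1] = 0."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def repeat(bits, q):
    """Repeat each input bit q times (the 'Repeat' stage)."""
    return [b for b in bits for _ in range(q)]

def araa_encode(bits, q=2):
    """Toy ARAA-style chain: accumulate -> repeat -> accumulate -> accumulate."""
    return accumulate(accumulate(repeat(accumulate(bits), q)))
```

    Because every stage is a running XOR or a bit copy, encoding is linear-time in the block length, which is the "very fast encoder structure" the abstract refers to.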

  6. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
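    As a concrete instance of the group codes and syndrome decoding techniques the book covers, the sketch below implements the classic Hamming(7,4) code, whose parity-check matrix has the binary representation of the column index as its columns, so the syndrome of a single-bit error directly names the corrupted position. This example is illustrative and is not drawn from the book's own listings:

```python
# Parity-check matrix H for Hamming(7,4): column j is the binary
# representation of j+1, so a single-bit error at position j gives
# syndrome value j+1.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]

def encode(d):
    """Place 4 data bits at positions 3,5,6,7; set parities at 1,2,4 so H.c = 0."""
    c = [0] * 7
    c[2], c[4], c[5], c[6] = d
    c[0] = c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    c[1] = c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    c[3] = c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    return c

def syndrome(word):
    """Compute the 3-bit syndrome as an integer in 0..7."""
    return sum((sum(h * w for h, w in zip(row, word)) % 2) << i
               for i, row in enumerate(H))

def correct(word):
    """Syndrome decoding: a nonzero syndrome names the bit to flip."""
    s = syndrome(word)
    if s:
        word = list(word)
        word[s - 1] ^= 1
    return word
```

    This is the simplest example of the "generalized parity check codes" and syndrome decoding discussed in the text: one check equation per row of H, with single-error correction guaranteed.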

  7. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    PubMed

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.

  8. 950 keV X-Band Linac For Material Recognition Using Two-Fold Scintillator Detector As A Concept Of Dual-Energy X-Ray System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kiwoo; Natsui, Takuya; Hirai, Shunsuke

    2011-06-01

    One of the advantages of applying an X-band linear accelerator (linac) is the compact size of the whole system, which suggests the possibility of on-site systems such as a customs inspection system in an airport. As the X-ray source, we have developed an X-band linac and achieved a maximum X-ray energy of 950 keV using a low-power magnetron (250 kW) with a 2 μs pulse length. The whole linac system fits within 1×1×1 m³, which is realized by introducing the X-band system. In addition, we have designed a two-fold scintillator detector based on the dual-energy X-ray concept. The Monte Carlo N-Particle transport (MCNP) code was used to design the sensor part with two scintillators, CsI and CdWO4. The customs inspection system is composed of two components, the 950 keV X-band linac and the two-fold scintillator detector, which are operated to simulate a real situation such as baggage checks in an airport. We show the results of experiments performed with metal samples, iron and lead, as targets under several conditions.

  9. Simulation of the Simbol-X Telescope

    NASA Astrophysics Data System (ADS)

    Chauvin, M.; Roques, J. P.

    2009-05-01

    We have developed a simulation tool for a Wolter I telescope operating in formation flight. The aim is to understand and predict the behavior of the Simbol-X instrument. As the geometry is variable, formation flight introduces new challenges and complex implications. Our code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, along with the relative drifts of the two spacecraft. It takes into account the angle- and energy-dependent interactions of the photons with the mirrors and applies to any grazing-incidence telescope. The resulting images of simulated sources from 0.1 keV to 100 keV allow us to optimize the configuration of the instrument and to assess the performance of the Simbol-X telescope.

  10. Medieval Music. Alfonso X & the Cantigas de Santa Maria.

    ERIC Educational Resources Information Center

    McRae, Lee

    This lesson introduces students to music in the Court of Alfonso X The Learned, Spanish king from 1252-1284. The readings provide information about King Alfonso, his political ambitions, and his contributions to Spanish medieval history. The lesson also introduces his establishment of laws with new legal codes and his remarkable collection of…

  11. xRRM

    PubMed Central

    Singh, Mahavir; Choi, Charles P.; Feigon, Juli

    2013-01-01

    Genuine La and La-related proteins group 7 (LARP7) bind to the non-coding RNAs transcribed by RNA polymerase III (RNAPIII), which end in UUU-3′OH. The La motif and RRM1 of these proteins (the La module) cooperate to bind the UUU-3′OH, protecting the RNA from degradation, while other domains may be important for RNA folding or other functions. Among the RNAPIII transcripts is ciliate telomerase RNA (TER). p65, a member of the LARP7 family, is an integral Tetrahymena thermophila telomerase holoenzyme protein required for TER biogenesis and telomerase RNP assembly. p65, together with TER and telomerase reverse transcriptase (TERT), form the Tetrahymena telomerase RNP catalytic core. p65 has an N-terminal domain followed by a La module and a C-terminal domain, which binds to the TER stem 4. We recently showed that the p65 C-terminal domain harbors a cryptic, atypical RRM, which uses a unique mode of single- and double-strand RNA binding and is required for telomerase RNP catalytic core assembly. This domain, which we named xRRM, appears to be present in and unique to genuine La and LARP7 proteins. Here we review the structure of the xRRM, discuss how this domain could recognize diverse substrates of La and LARP7 proteins and discuss the functional implications of the xRRM as an RNP chaperone. PMID:23328630

  12. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.

  13. Coded-aperture imaging of the Galactic center region at gamma-ray energies

    NASA Technical Reports Server (NTRS)

    Cook, Walter R.; Grunsfeld, John M.; Heindl, William A.; Palmer, David M.; Prince, Thomas A.

    1991-01-01

    The first coded-aperture images of the Galactic center region at energies above 30 keV have revealed two strong gamma-ray sources. One source has been identified with the X-ray source 1E 1740.7-2942, located 0.8 deg away from the nucleus. If this source is at the distance of the Galactic center, it is one of the most luminous objects in the Galaxy at energies from 35 to 200 keV. The second source is consistent in location with the X-ray source GX 354+0 (MXB 1728-34). In addition, gamma-ray flux from the location of GX 1+4 was marginally detected at a level consistent with other post-1980 measurements. No significant hard X-ray or gamma-ray flux was detected from the direction of the Galactic nucleus or from the direction of the recently discovered gamma-ray source GRS 1758-258.

  14. Development of a Grid-Based Gyro-Kinetic Simulation Code

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Brunetti, Maura; Tran, Trach-Minh; Brunner, Stephan

    2006-10-01

    A grid-based semi-Lagrangian code using cubic spline interpolation is being developed at CRPP for solving the electrostatic drift-kinetic equations [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)] in a cylindrical system. This 4-dimensional code, CYGNE, is part of a project with the long-term aim of studying microturbulence in toroidal fusion devices, in the more general frame of the gyro-kinetic equations. Towards their non-linear phase, the simulations from this code are subject to significant overshoot problems, reflected by the development of regions of negative values of the distribution function, which leads to poor energy conservation. This has motivated the study of alternative schemes. On the one hand, new time integration algorithms are considered in the semi-Lagrangian frame. On the other hand, fully Eulerian schemes, which separate time and space discretisation (method of lines), are investigated. In particular, the Essentially Non-Oscillatory (ENO) approach, constructed so as to minimize the overshoot problem, has been considered. All these methods have first been tested in the simpler case of the 2-dimensional guiding-center model for the Kelvin-Helmholtz instability, which enables us to address the specific issue of the E x B drift also met in the more complex gyrokinetic-type equations. Based on these preliminary studies, the most promising methods are being implemented and tested in CYGNE.

  15. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  16. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  17. Coding for urologic office procedures.

    PubMed

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high-performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum-Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full-matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.

  19. Obituary: Arthur Dodd Code (1923-2009)

    NASA Astrophysics Data System (ADS)

    Marché, Jordan D., II

    2009-12-01

    future course of stellar astronomy," a prediction strongly borne out in the decades that followed. In 1959, Code founded the Space Astronomy Laboratory (SAL) within the UW Department of Astronomy. Early photometric and spectrographic equipment was test-flown aboard NASA's X-15 rocket plane and Aerobee sounding rockets. Along with other SAL personnel, including Theodore E. Houck, Robert C. Bless, and John F. McNall, Code (as principal investigator) was responsible for the design of the Wisconsin Experiment Package (WEP) as one of two suites of instruments to be flown aboard the Orbiting Astronomical Observatory (OAO), which represented a milestone in the advent of space astronomy. With its seven reflecting telescopes feeding five filter photometers and two scanning spectrometers, WEP permitted the first extended observations in the UV portion of the spectrum. After the complete failure of the OAO-1 spacecraft (launched in 1966), OAO-2 was successfully launched on 7 December 1968 and gathered data on over a thousand celestial objects during the next 50 months, including stars, nebulae, galaxies, planets, and comets. These results appeared in a series of more than 40 research papers, chiefly in the Ap.J., along with the 1972 monograph, The Scientific Results from the Orbiting Astronomical Observatory (OAO-2), edited by Code. Between the OAO launches, other SAL colleagues of Code developed the Wisconsin Automatic Photoelectric Telescope (or APT), the first computer-controlled (or "robotic") telescope. Driven by a PDP-8 mini-computer, it routinely collected atmospheric extinction data. Code was also chosen principal investigator for the Wisconsin Ultraviolet Photo-Polarimeter Experiment (or WUPPE). This used a UV-sensitive polarimeter designed by Kenneth Nordsieck that was flown twice aboard the space shuttles in 1990 and 1995. 
Among other findings, WUPPE observations demonstrated that interstellar dust does not appreciably change the direction of polarization of starlight.

  20. Micromechanics Analysis Code Post-Processing (MACPOST) User Guide 1.0

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Comiskey, Michele D.; Bednarcyk, Brett A.

    1999-01-01

    As advanced composite materials have gained wider usage, the need for analytical models and computer codes to predict the thermomechanical deformation response of these materials has increased significantly. Recently, a micromechanics technique called the generalized method of cells (GMC) has been developed which has the capability to fulfill this goal. To provide a framework for GMC, the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) has been developed. As MAC/GMC has been updated, significant improvements have been made to the post-processing capabilities of the code. Through the MACPOST program, which operates directly within the MSC/PATRAN graphical pre- and post-processing package, a direct link between the analysis capabilities of MAC/GMC and the post-processing capabilities of MSC/PATRAN has been established. MACPOST has simplified the production, printing, and exportation of results for unit cells analyzed by MAC/GMC. MACPOST allows different micro-level quantities to be plotted quickly and easily in contour plots. In addition, meaningful data for X-Y plots can be examined. MACPOST thus serves as an important analysis and visualization tool for the macro- and micro-level data generated by MAC/GMC. This report serves as the user's manual for the MACPOST program.

  1. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
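    The 16-bit CRC mentioned above can be sketched as a bitwise shift-register update. The variant below uses the common CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones preset, which is the form generally associated with the CCSDS recommendation; treat the specific parameters as an assumption for illustration rather than a statement of any mission's configuration:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 and all-ones initial register.

    Each message byte is XORed into the top of the 16-bit register, then
    the register is shifted 8 times, XORing in the polynomial whenever the
    bit shifted out is 1.
    """
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Example: the standard check value for this variant
print(hex(crc16_ccitt(b"123456789")))  # -> 0x29b1
```

    A table-driven version (256-entry lookup per byte) is the usual optimization in flight and ground software, but the bitwise form above makes the polynomial division explicit.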

  2. Critical Care Coding for Neurologists.

    PubMed

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  3. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
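    FERRET itself is a general least-squares code with a full treatment of correlations; the sketch below illustrates only the simplest special case it generalizes, combining independent, uncorrelated measurements by inverse-variance weighting, with a hypothetical function name:

```python
def combine(measurements):
    """Least-squares combination of independent (value, sigma) measurements.

    Inverse-variance weighting gives the minimum-variance unbiased estimate
    and propagates a quantitative uncertainty for the combined value.
    """
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    total = sum(weights)
    mean = sum(w * value for w, (value, _) in zip(weights, measurements)) / total
    sigma = total ** -0.5   # uncertainty of the combined estimate
    return mean, sigma
```

    For example, two measurements 10.0 ± 1.0 and 12.0 ± 1.0 combine to 11.0 with a reduced uncertainty of about 0.71; handling correlated uncertainties requires the full covariance-matrix formulation that codes like FERRET implement.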

  4. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. 
The annotation inference algorithm

  5. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  6. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…

  7. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    ERIC Educational Resources Information Center

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  8. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.
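
    The erasure-protection idea behind such codes can be shown with a much simpler construction. The sketch below is not the tTN or cTN construction; it illustrates only the basic parity principle: one XOR parity packet lets a receiver rebuild any single lost packet, whereas tornado codes layer many sparse parity constraints to tolerate many losses.

```python
# Minimal erasure-coding sketch: one XOR parity packet protects a block
# of equal-length data packets against the loss of any one packet.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append a parity packet equal to the XOR of all data packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(received):
    """received: the coded block with exactly one None (the erasure)."""
    missing = received.index(None)
    acc = None
    for i, p in enumerate(received):
        if i == missing:
            continue
        acc = p if acc is None else xor_bytes(acc, p)
    restored = list(received)
    restored[missing] = acc
    return restored[:-1]  # drop the parity packet, return data packets

data = [b"pkt0", b"pkt1", b"pkt2"]
coded = encode(data)
coded[1] = None          # simulate a single packet erasure
assert recover(coded) == data
```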

  9. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The report considers the following codes: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section along with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published work and validations of the codes; however, the codes have since been further developed to extend their capabilities.

  10. Industrial Computer Codes

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife-to-knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals, and DYSEAL provides dynamics for the seal geometry.

  11. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor…

  12. Generation of bright attosecond x-ray pulse trains via Thomson scattering from laser-plasma accelerators.

    PubMed

    Luo, W; Yu, T P; Chen, M; Song, Y M; Zhu, Z C; Ma, Y Y; Zhuo, H B

    2014-12-29

    The generation of attosecond x-ray pulses is attracting more and more attention within the advanced light source user community due to their potentially wide applications. Here we propose an all-optical scheme to generate bright, attosecond hard x-ray pulse trains by Thomson backscattering of similarly structured electron beams produced in a vacuum channel by a tightly focused laser pulse. Design parameters for a proof-of-concept experiment are presented and demonstrated by using a particle-in-cell code and a four-dimensional laser-Compton scattering simulation code to model both the laser-based electron acceleration and Thomson scattering processes. Trains of 200-attosecond-duration hard x-ray pulses holding stable longitudinal spacing, with photon energies approaching 50 keV and maximum achievable peak brightness up to 10^20 photons/s/mm^2/mrad^2/0.1%BW for each micro-bunch, are observed. The suggested physical scheme for attosecond x-ray pulse-train generation may directly access the fastest time scales relevant to electron dynamics in atoms, molecules and materials.

  13. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
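
    For readers unfamiliar with the convolutional side of such constructions, the sketch below implements a generic rate-1/2 feedforward convolutional encoder with memory 6, using the well-known (171, 133) octal generator pair of the NASA-standard constraint-length-7 code family referenced above. It is not the PUM construction itself, which layers block-code structure on top of this kind of encoder memory.

```python
# Rate-1/2 feedforward convolutional encoder, memory 6 (constraint
# length 7), with the standard (171, 133) octal generator polynomials.

G1, G2 = 0o171, 0o133  # generator polynomials (tap masks, 7 bits each)

def conv_encode(bits):
    state = 0  # shift register holding the 6 previous input bits
    out = []
    for b in bits:
        reg = (b << 6) | state                     # current bit + memory
        out.append(bin(reg & G1).count("1") & 1)   # parity of G1 taps
        out.append(bin(reg & G2).count("1") & 1)   # parity of G2 taps
        state = reg >> 1                           # shift register forward
    return out

# Two output bits per input bit, i.e. rate 1/2:
coded = conv_encode([1, 0, 1, 1])
assert len(coded) == 8
```

    A PUM code would replace part of this sliding memory with block-code structure, which is what allows block-code theory to be used in its design.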

  14. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  15. The design of the CMOS wireless bar code scanner applying optical system based on ZigBee

    NASA Astrophysics Data System (ADS)

    Chen, Yuelin; Peng, Jian

    2008-03-01

    The working range of a traditional bar code scanner is limited by the length of its data line, while the wireless bar code scanners currently on the market generally reach only 30 m to 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee was designed to meet market demands. The scan system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, the result can be inaccurate or wrongly decoded because of image smudging, interference, poor imaging conditions, signal noise, or unstable system voltage; we therefore apply matrix evaluation and Reed-Solomon arithmetic to correct such errors. To construct the whole wireless optical bar code system and ensure its ability to transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the circuit of the wireless transmitting/receiving CC2430 module is established. By porting the embedded Linux operating system to the MCU, a practical wireless CMOS optical bar code scanner with multi-task support is constructed. Finally, communication performance was tested with the SmartRF evaluation software. In open space, every ZigBee node can achieve 50 m transmission with high reliability, and adding more ZigBee nodes extends the transmission distance to several thousand meters.

  16. MSAT-X: A technical introduction and status report

    NASA Technical Reports Server (NTRS)

    Dessouky, Khaled; Sue, Miles

    1988-01-01

    A technical introduction and status report for the Mobile Satellite Experiment (MSAT-X) program is presented. The concepts of a Mobile Satellite System (MSS) and its unique challenges are introduced. MSAT-X's role and objectives are delineated with focus on its achievements. An outline of MSS design philosophy is followed by a presentation and analysis of the MSAT-X results, which are cast in a broader context of an MSS. The current phase of MSAT-X has focused notably on the ground segment of MSS. The accomplishments in the four critical technology areas of vehicle antennas, modem and mobile terminal design, speech coding, and networking are presented. A concise evolutionary trace is incorporated in each area to elucidate the rationale leading to the current design choices. The findings in the area of propagation channel modeling are also summarized and their impact on system design discussed. To facilitate the assessment of the MSAT-X results, technology and subsystem recommendations are also included and integrated with a quantitative first-generation MSS design.

  17. An upgraded x-ray spectroscopy diagnostic on MST.

    PubMed

    Clayton, D J; Almagri, A F; Burke, D R; Forest, C B; Goetz, J A; Kaufman, M C; O'Connell, R

    2010-10-01

    An upgraded x-ray spectroscopy diagnostic is used to measure the distribution of fast electrons in MST and to determine Z(eff) and the particle diffusion coefficient D(r). A radial array of 12 CdZnTe hard-x-ray detectors measures 10-150 keV bremsstrahlung from fast electrons, a signature of reduced stochasticity and improved confinement in the plasma. A new Si soft-x-ray detector measures 2-10 keV bremsstrahlung from thermal and fast electrons. The shaped output pulses from both detector types are digitized and the resulting waveforms are fit with Gaussians to resolve pileup and provide good time and energy resolution. Lead apertures prevent detector saturation and provide a well-known etendue, while lead shielding prevents pickup from stray x-rays. New Be vacuum windows transmit >2 keV x-rays, and additional Al and Be filters are sometimes used to reduce low energy flux for better resolution at higher energies. Measured spectra are compared to those predicted by the Fokker-Planck code CQL3D to deduce Z(eff) and D(r).

  18. Interface requirements to couple thermal hydraulics codes to severe accident codes: ICARE/CATHARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camous, F.; Jacq, F.; Chatelard, P.

    1997-07-01

    In order to describe the whole sequence of severe LWR accidents with the same code, up to vessel failure, the Institute of Protection and Nuclear Safety has performed a coupling of the severe accident code ICARE2 to the thermal-hydraulics code CATHARE2. The resulting code, ICARE/CATHARE, is designed to be as pertinent as possible in all phases of the accident. This paper is mainly devoted to the description of the ICARE2-CATHARE2 coupling.

  19. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  20. A final report to the Laboratory Directed Research and Development committee on Project 93-ERP-075: "X-ray laser propagation and coherence: Diagnosing fast-evolving, high-density laser plasmas using X-ray lasers"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, A.S.; Cauble, R.; Da Silva, L.B.

    1996-02-01

    This report summarizes the major accomplishments of this three-year Laboratory Directed Research and Development (LDRD) Exploratory Research Project (ERP) entitled "X-ray Laser Propagation and Coherence: Diagnosing Fast-evolving, High-density Laser Plasmas Using X-ray Lasers," tracking code 93-ERP-075. The most significant accomplishment of this project is the demonstration of a new laser plasma diagnostic: a soft x-ray Mach-Zehnder interferometer using a neonlike yttrium x-ray laser at 155 Å as the probe source. Detailed comparisons of absolute two-dimensional electron density profiles obtained from soft x-ray laser interferograms and profiles obtained from radiation hydrodynamics codes, such as LASNEX, will allow us to validate and benchmark complex numerical models used to study the physics of laser-plasma interactions. Thus the development of the soft x-ray interferometry technique provides a mechanism to probe the deficiencies of the numerical models and is an important tool for the high-energy-density physics and science-based stockpile stewardship programs. The authors have used the soft x-ray interferometer to study a number of high-density, fast-evolving, laser-produced plasmas, such as the dynamics of exploding foils and colliding plasmas. They are pursuing the application of the soft x-ray interferometer to study ICF-relevant plasmas, such as capsules and hohlraums, on the Nova 10-beam facility. They have also studied the development of enhanced-coherence, shorter-pulse-duration, and high-brightness x-ray lasers. The utilization of improved x-ray laser sources can ultimately enable them to obtain three-dimensional holographic images of laser-produced plasmas.

  1. Time-resolved hard x-ray spectrometer

    NASA Astrophysics Data System (ADS)

    Moy, Kenneth; Cuneo, Michael; McKenna, Ian; Keenan, Thomas; Sanford, Thomas; Mock, Ray

    2006-08-01

    Wire-array studies are being conducted at the SNL Z accelerator to maximize the x-ray generation for inertial confinement fusion targets and high energy density physics experiments. An integral component of these studies is the characterization of the time-resolved spectral content of the x-rays. Due to potential spatial anisotropy in the emitted radiation, it is also critical to diagnose the time-evolved spectral content in a space-resolved manner. To accomplish these two measurement goals, we developed an x-ray spectrometer using a set of high-speed detectors (silicon PIN diodes) with a collimated field-of-view that converged on a 1-cm-diameter spot at the pinch axis. Spectral discrimination is achieved by placing high-Z absorbers in front of these detectors. We built two spectrometers to permit simultaneous different angular views of the emitted radiation. Spectral data have been acquired from recent Z shots for the radial and axial (polar) views. UNSPEC 1 has been adapted to analyze and unfold the measured data to reconstruct the x-ray spectrum. The unfold operator code, UFO2, is being adapted for a more comprehensive spectral unfolding treatment.

  2. INDOS: conversational computer codes to implement ICRP-10-10A models for estimation of internal radiation dose to man

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Killough, G.G.; Rohwer, P.S.

    1974-03-01

    INDOS1, INDOS2, and INDOS3 (the INDOS codes) are conversational FORTRAN IV programs, implemented for use in time-sharing mode on the ORNL PDP-10 System. These codes use ICRP-10-10A models to estimate the radiation dose to an organ of the body of Reference Man resulting from the ingestion or inhalation of any one of various radionuclides. Two patterns of intake are simulated: intakes at discrete times and continuous intake at a constant rate. The INDOS codes provide tabular output of dose rate and dose vs. time, graphical output of dose vs. time, and punched-card output of organ burden and dose vs. time. The models of internal dose calculation are discussed and instructions for the use of the INDOS codes are provided. The INDOS codes are available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, P. O. Box X, Oak Ridge, Tennessee 37830. (auth)

  3. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland.
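
    The variance-reduction idea described above can be sketched in a few lines (a toy Python illustration, not the C++ code presented in the record): instead of waiting for a rare event such as a fluorescence emission to occur by chance, every photon history is forced to produce it and carries the event probability as a statistical weight. The probability and history count below are arbitrary.

```python
import random

# Toy forced-interaction variance reduction: a fluorescence event occurs
# with small probability p per photon history. The analog estimator
# scores 1 on the rare event; the forced estimator scores the weight p on
# every history. Both are unbiased; for this trivial problem the forced
# estimator even has zero variance. In realistic transport the weight
# varies along each track, but the principle is the same.
random.seed(1)
p = 1e-3          # per-photon fluorescence probability (assumed)
N = 10_000        # number of photon histories

analog = sum(1 for _ in range(N) if random.random() < p) / N
forced = sum(p for _ in range(N)) / N   # force the event, weight = p

assert abs(forced - p) < 1e-12           # exact (zero-variance here)
assert abs(analog - p) < 5e-3            # noisy analog estimate
```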

  4. The effects of sex-biased gene expression and X-linkage on rates of sequence evolution in Drosophila.

    PubMed

    Campos, José Luis; Johnston, Keira; Charlesworth, Brian

    2017-12-08

    A faster rate of adaptive evolution of X-linked genes compared with autosomal genes (the faster-X effect) can be caused by the fixation of recessive or partially recessive advantageous mutations. This effect should be largest for advantageous mutations that affect only male fitness, and least for mutations that affect only female fitness. We tested these predictions in Drosophila melanogaster by using coding and functionally significant non-coding sequences of genes with different levels of sex-biased expression. Consistent with theory, nonsynonymous substitutions in most male-biased and unbiased genes show faster adaptive evolution on the X. However, genes with very low recombination rates do not show such an effect, possibly as a consequence of Hill-Robertson interference. Contrary to expectation, there was a substantial faster-X effect for female-biased genes. After correcting for recombination rate differences, however, female-biased genes did not show a faster-X effect. Similar analyses of non-coding UTRs and long introns showed a faster-X effect for all groups of genes, other than introns of female-biased genes. Given the strong evidence that deleterious mutations are mostly recessive or partially recessive, we would expect a slower rate of evolution of X-linked genes for slightly deleterious mutations that become fixed by genetic drift. Surprisingly, we found little evidence for this after correcting for recombination rate, implying that weakly deleterious mutations are mostly close to being semidominant. This is consistent with evidence from polymorphism data, which we use to test how models of selection that assume semidominance with no sex-specific fitness effects may bias estimates of purifying selection.

  5. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... for Residential Construction in High Wind Regions. ICC 700: National Green Building Standard The..., coordinated, and necessary to regulate the built environment. Federal agencies frequently use these codes and... International Codes and Standards consist of the following: ICC Codes International Building Code. International...

  6. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... for Residential Construction in High Wind Areas. ICC 700: National Green Building Standard. The... Codes and Standards that are comprehensive, coordinated, and necessary to regulate the built environment... International Codes and Standards consist of the following: ICC Codes International Building Code. International...

  7. DQE simulation of a-Se x-ray detectors using ARTEMIS

    NASA Astrophysics Data System (ADS)

    Fang, Yuan; Badano, Aldo

    2016-03-01

    Detective Quantum Efficiency (DQE) is one of the most important image quality metrics for evaluating the spatial resolution performance of flat-panel x-ray detectors. In this work, we simulate the DQE of amorphous selenium (a-Se) x-ray detectors with a detailed Monte Carlo transport code (ARTEMIS) for modeling semiconductor-based direct x-ray detectors. The transport of electron-hole pairs is achieved with a spatiotemporal model that accounts for recombination and trapping of carriers and Coulombic effects of space charge and external applied electric field. A range of x-ray energies has been simulated from 10 to 100 keV. The DQE results can be used to study the spatial resolution characteristics of detectors at different energies.
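
    As a back-of-envelope companion to a full transport calculation (this is not the ARTEMIS method, which resolves spatial frequency), the zero-frequency DQE of an energy-integrating detector can be estimated from the moments of its simulated pulse-height distribution via the Swank factor, DQE(0) = eta * m1^2 / (m0 * m2). The numbers in the sketch are hypothetical.

```python
# Zero-frequency DQE estimate from a pulse-height distribution:
# DQE(0) = eta * A_S, where eta is the quantum efficiency and A_S the
# Swank factor built from the distribution's first three moments.

def moments(counts, heights):
    m0 = sum(counts)
    m1 = sum(c * h for c, h in zip(counts, heights))
    m2 = sum(c * h * h for c, h in zip(counts, heights))
    return m0, m1, m2

def dqe0(eta, counts, heights):
    m0, m1, m2 = moments(counts, heights)
    swank = m1 * m1 / (m0 * m2)
    return eta * swank

# A monoenergetic response (all pulses the same height) gives Swank = 1,
# so DQE(0) reduces to the quantum efficiency alone:
assert abs(dqe0(0.8, [100], [50.0]) - 0.8) < 1e-12

# Broadening the pulse-height distribution lowers DQE(0):
assert dqe0(0.8, [50, 50], [30.0, 70.0]) < 0.8
```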

  8. Impacts of Model Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  9. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2017-12-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system vA^T M_B(s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinality of the unauthorized set, √(2^{|B|}).

  10. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2018-03-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system vA^T M_B(s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinality of the unauthorized set, √(2^{|B|}).
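
    The classical operation that the proposed quantum procedure accelerates is ordinary Gauss-Jordan elimination. A minimal classical sketch over GF(2) is shown below (illustrative only: the papers' quantum version replaces the pivot search with Grover search; the bit-vector representation is an implementation choice, not from the records).

```python
# Classical Gauss-Jordan inversion over GF(2). Each matrix row is a
# Python int used as a bit vector, most-significant bit = column 0.

def gf2_inverse(rows, n):
    """Invert an n x n invertible matrix over GF(2)."""
    # Augment [A | I] into single bit vectors of width 2n.
    aug = [(rows[i] << n) | (1 << (n - 1 - i)) for i in range(n)]
    for col in range(n):
        bit = 1 << (2 * n - 1 - col)
        pivot = next(r for r in range(col, n) if aug[r] & bit)
        aug[col], aug[pivot] = aug[pivot], aug[col]  # swap pivot row up
        for r in range(n):
            if r != col and aug[r] & bit:
                aug[r] ^= aug[col]                   # row addition mod 2
    mask = (1 << n) - 1
    return [row & mask for row in aug]               # right half = A^-1

# Invert [[1,1],[0,1]] over GF(2); it is its own inverse.
A = [0b11, 0b01]
assert gf2_inverse(A, 2) == [0b11, 0b01]
```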

  11. The Diffuse Soft X-ray Background: Trials and Tribulations

    NASA Astrophysics Data System (ADS)

    Ulmer, Melville P.

    2013-01-01

    I joined the University of Wisconsin-Madison sounding rocket group at its inception. It was an exciting time, as nobody knew what the X-ray sky looked like. Our group focused on the soft X-ray background and built proportional counters with super-thin (2 micron thick) windows. As the internal gas pressure of the counters was about 1 atmosphere, it was no mean feat to get a payload to launch without the window bursting. On top of that, we built all our own software, from aspect solutions to unfolding the spectral data. For we did it then as we do now: our computer code modeled the detector response, folded various spectral shapes through the response, and compared the results with the raw data. As far as interpretation goes, here are examples of how one can get things wrong. The Berkeley group published a paper on the soft X-ray background that disagreed with ours. Why? It turned out they had **assumed** the galactic plane was completely opaque to soft X-rays and hence corrected for detector background that way. It turns out that the ISM emits in soft X-rays! Another example was the faux pas of the Calgary group. They didn't properly shield their detector from the sounding rocket telemetry. Thus they got an enormous signal, which, to our amusement, some (ambulance-chasing) theoreticians tried to explain! So back then, as now, mistakes were made, but at least we all knew how our X-ray systems worked from soup (the detectors) to nuts (the data analysis code), whereas today "anybody" with a good idea but only a vague inkling of how detectors, mirrors, and software work can be an X-ray astronomer. On the one hand, this has made the field accessible to all; on the other, errors in interpretation can be made when the X-ray telescope user falls prey to running black-box software. Furthermore, with so much funding going into supporting observers, there is little left to make the necessary technology advances or to keep the core expertise in place even to stay even with…

  12. Partially coherent X-ray wavefront propagation simulations including grazing-incidence focusing optics.

    PubMed

    Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben

    2014-09-01

    X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, is described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can be hardly approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method for the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.

  13. Ab initio calculations of the structural, electronic, thermodynamic and thermal properties of BaSe1-xTex alloys

    NASA Astrophysics Data System (ADS)

    Drablia, S.; Boukhris, N.; Boulechfar, R.; Meradji, H.; Ghemid, S.; Ahmed, R.; Omran, S. Bin; El Haj Hassan, F.; Khenata, R.

    2017-10-01

    The alkaline earth metal chalcogenides are being intensively investigated because of their advanced technological applications, for example in photoluminescent devices. In this study, the structural, electronic, thermodynamic and thermal properties of the BaSe1-xTex alloys at alloying compositions x = 0, 0.25, 0.50, 0.75 and 1 are investigated. The full-potential linearized augmented plane wave plus local orbitals method, designed within density functional theory, was used to perform the total energy calculations. The effect of composition on the lattice parameters and bulk modulus, as well as on the band gap energy, is analyzed. From our results, we found a deviation of the calculated lattice constants from Vegard’s law, as well as a deviation of the bulk modulus from the linear concentration dependence. We also carried out a microscopic analysis of the origin of the band gap bowing parameter. Furthermore, the thermodynamic stability of the considered alloys was explored through the calculation of the miscibility critical temperature. The quasi-harmonic Debye model, as implemented in the Gibbs code, was used to predict the thermal properties of the BaSe1-xTex alloys, and these investigations comprise our first theoretical predictions concerning the BaSe1-xTex alloys.
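
    The deviation from Vegard's law reported above is conventionally captured by a bowing parameter b, so that an alloy property follows P(x) = (1-x)P_A + xP_B - b x(1-x). The sketch below illustrates this interpolation with hypothetical numbers, not the computed BaSe/BaTe values.

```python
# Vegard's-law interpolation with a quadratic bowing correction for an
# A(1-x)B(x) alloy. All numerical values here are hypothetical.

def alloy_property(x, p_A, p_B, b=0.0):
    """P(x) = (1-x)*P_A + x*P_B - b*x*(1-x); b = 0 recovers Vegard's law."""
    return (1 - x) * p_A + x * p_B - b * x * (1 - x)

a_A, a_B = 6.60, 7.00   # hypothetical end-member lattice constants (angstrom)
b = 0.12                # hypothetical bowing parameter

vegard = alloy_property(0.5, a_A, a_B)      # linear interpolation
bowed  = alloy_property(0.5, a_A, a_B, b)   # with bowing

assert abs(vegard - 6.80) < 1e-12
# The deviation from the linear law is largest at x = 0.5, equal to b/4:
assert abs((vegard - bowed) - b / 4) < 1e-12
```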

  14. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1984-01-01

    Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.

  15. Ray-tracing critical-angle transmission gratings for the X-ray Surveyor and Explorer-size missions

    NASA Astrophysics Data System (ADS)

    Günther, Hans M.; Bautz, Marshall W.; Heilmann, Ralf K.; Huenemoerder, David P.; Marshall, Herman L.; Nowak, Michael A.; Schulz, Norbert S.

    2016-07-01

    We study a critical-angle transmission (CAT) grating spectrograph that delivers a spectral resolution significantly above that of any X-ray spectrograph ever flown. This new technology will allow us to resolve kinematic components in absorption and emission lines of galactic and extragalactic matter down to unprecedented dispersion levels. We perform ray-trace simulations to characterize the performance of the spectrograph in the context of an X-ray Surveyor or Arcus-like layout (two mission concepts currently under study). Our newly developed ray-trace code is a tool suite to simulate the performance of X-ray observatories. The simulator code is written in Python, because the use of a high-level scripting language allows modifications of the simulated instrument design in very few lines of code. This is especially important in the early phase of mission development, when the performances of different configurations are contrasted. To reduce the run time and allow simulations of a few million photons in a few minutes on a desktop computer, the simulator code uses tabulated input (from theoretical models or laboratory measurements of samples) for grating efficiencies and mirror reflectivities. We find that the grating facet alignment tolerances required to maintain at least 90% of the resolving power that the spectrometer has with perfect alignment are (i) translation parallel to the optical axis below 0.5 mm, (ii) rotation around the optical axis or the groove direction below a few arcminutes, and (iii) constancy of the grating period to 1:10^5. Translations along and rotations around the remaining axes can be significantly larger than this without impacting the performance.
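The dispersion such a spectrograph relies on follows from the plain grating equation m·λ = d·sin β. A small Python sketch; the 200 nm period, focal length, and wavelengths are illustrative assumptions, not mission parameters:

```python
import math

# Sketch: first-order diffraction angle and focal-plane position from the
# grating equation m*lambda = d*sin(beta) for a transmission grating at
# normal incidence. All numeric values are illustrative.

def diffraction_angle(wavelength_nm, period_nm, order=1):
    s = order * wavelength_nm / period_nm
    if abs(s) > 1.0:
        raise ValueError("this order is evanescent at this wavelength")
    return math.asin(s)  # radians

def dispersion_mm(wavelength_nm, period_nm, focal_length_mm, order=1):
    """Position of the diffracted spot on a flat focal plane."""
    return focal_length_mm * math.tan(
        diffraction_angle(wavelength_nm, period_nm, order))

if __name__ == "__main__":
    for lam in (1.0, 2.0, 4.0):   # nm, soft X-ray band
        print(lam, round(dispersion_mm(lam, 200.0, 9000.0), 2))
```

Because the spot position scales almost linearly with wavelength, a fractional change of the grating period translates directly into a fractional wavelength error, which is why period constancy enters the tolerance budget.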

  16. Simulation of a complete X-ray digital radiographic system for industrial applications.

    PubMed

    Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H

    2018-05-19

    Simulating X-ray images is of great importance in industry and medicine. Such simulations permit optimizing the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A 3-layer uniform plate comprising a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer was also defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed and images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
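The dose-to-gray-value step can be sketched as below. The logarithmic response and the constants a and b are hypothetical stand-ins for the calibration relationship the authors determined in their separate procedure:

```python
import math

# Sketch of the dose-to-gray-value conversion step. The log-shaped response
# and the constants a, b are assumptions for illustration only, not the
# calibration the authors measured.

def dose_to_gray(dose_mGy, a=10000.0, b=20000.0, bit_depth=16):
    """Map absorbed dose in a pixel to a CR gray value (assumed log response)."""
    if dose_mGy <= 0.0:
        return 0
    gv = a * math.log10(dose_mGy) + b
    return int(min(max(gv, 0), 2 ** bit_depth - 1))

if __name__ == "__main__":
    # Higher dose maps to a brighter pixel under this assumed calibration.
    print([dose_to_gray(d) for d in (0.1, 1.0, 10.0)])
```

In practice the mesh-tally dose in each pixel would be fed through such a fitted function, with clipping to the detector's bit depth.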

  17. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  18. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2012-09-01

    the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details...developed efficient coding with the software modules for the image registration, flat-field correction, and phase retrievals. In addition, we...X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging

  19. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    PubMed

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in the hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. The coding validity of a condition is closely related to its clinical importance and to the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
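The two validity measures named above reduce to simple ratios over a cross-tabulation of coded data against chart review (the reference standard). A minimal sketch with made-up counts:

```python
# Sketch: sensitivity and positive predictive value of a diagnosis code,
# computed from a 2x2 comparison against chart review. Counts are illustrative.

def sensitivity(tp, fn):
    """Proportion of chart-confirmed cases that were coded."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Proportion of coded cases confirmed on chart review."""
    return tp / (tp + fp)

if __name__ == "__main__":
    tp, fp, fn = 80, 10, 40   # hypothetical counts for one condition
    print(round(sensitivity(tp, fn), 3))                 # 0.667
    print(round(positive_predictive_value(tp, fp), 3))   # 0.889
```

Under-coding of secondary conditions shows up as false negatives, which depress sensitivity while leaving the positive predictive value comparatively high.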

  20. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are made from the lower subbands to the higher subbands, in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
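The codebook-free property of lattice VQ comes from quantizing each coefficient vector to the nearest point of a fixed lattice rather than searching a trained codebook. A generic sketch using the standard nearest-point rule for the D4 lattice (integer vectors with even coordinate sum); this illustrates the idea, not the paper's exact lattice:

```python
# Sketch: lattice VQ via the nearest-point rule for the D_n lattice
# (integer vectors with even coordinate sum). No stored codebook is needed;
# the lattice structure itself defines all reproduction vectors.

def nearest_Dn(v):
    rounded = [round(x) for x in v]
    if sum(rounded) % 2 == 0:
        return rounded
    # Parity is odd: re-round the coordinate with the largest rounding error
    # in the opposite direction to restore even parity.
    i = max(range(len(v)), key=lambda k: abs(v[k] - rounded[k]))
    rounded[i] += 1 if v[i] > rounded[i] else -1
    return rounded

if __name__ == "__main__":
    # Quantize a 4-D block of wavelet coefficients to the D4 lattice.
    print(nearest_Dn([0.9, 0.1, -0.2, 0.4]))
```

Because every reproduction vector is generated by this rule, encoder and decoder stay matched by construction, which is the robustness advantage the abstract mentions.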

  1. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  2. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration fields, (b) a primary 60Co calibration beam, and (c) 6-MV and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode closely resembled the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode agreed very well with the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.

  3. A Supersonic Argon/Air Coaxial Jet Experiment for Computational Fluid Dynamics Code Validation

    NASA Technical Reports Server (NTRS)

    Clifton, Chandler W.; Cutler, Andrew D.

    2007-01-01

    A non-reacting experiment is described in which data has been acquired for the validation of CFD codes used to design high-speed air-breathing engines. A coaxial jet-nozzle has been designed to produce pressure-matched exit flows of Mach 1.8 at 1 atm in both a center jet of argon and a coflow jet of air, creating a supersonic, incompressible mixing layer. The flowfield was surveyed using total temperature, gas composition, and Pitot probes. The data set was compared to CFD code predictions made using Vulcan, a structured grid Navier-Stokes code, as well as to data from a previous experiment in which a He-O2 mixture was used instead of argon in the center jet of the same coaxial jet assembly. Comparison of experimental data from the argon flowfield and its computational prediction shows that the CFD produces an accurate solution for most of the measured flowfield. However, the CFD prediction deviates from the experimental data in the region downstream of x/D = 4, underpredicting the mixing-layer growth rate.

  4. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  5. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  6. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    This software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.
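The flavor of such universal noiseless coding can be illustrated with a split-sample (Rice) coder: each nonnegative integer becomes a unary-coded quotient plus k literal low-order bits, and decoding recovers the data exactly. A generic Python sketch of the technique, not the package's FORTRAN routines:

```python
# Sketch: Rice (split-sample) coding of nonnegative integers.
# Encoding is lossless: the decoder reconstructs the input bit-for-bit.

def rice_encode(values, k):
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")   # unary quotient, 0-terminated
        bits.append(format(r, "0{}b".format(k)) if k else "")
    return "".join(bits)

def rice_decode(bitstring, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":
            q, i = q + 1, i + 1
        i += 1                       # skip the terminating 0
        r = int(bitstring[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

if __name__ == "__main__":
    data = [0, 3, 5, 12, 1]
    code = rice_encode(data, 2)
    print(code, rice_decode(code, 2, len(data)))
```

Adaptive coders of this family pick k per block to match the local statistics, which is what makes them effective on nearly arbitrary sources.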

  7. Subgrid Scale Modeling in Solar Convection Simulations using the ASH Code

    NASA Technical Reports Server (NTRS)

    Young, Y.-N.; Miesch, M.; Mansour, N. N.

    2003-01-01

    The turbulent solar convection zone has remained one of the most challenging and important subjects in physics. Understanding the complex dynamics in the solar convection zone is crucial for gaining insight into the solar dynamo problem. Many solar observatories have generated revealing data with great detail on large-scale motions in the solar convection zone. For example, a strong differential rotation is observed: the angular rotation is faster at the equator than near the poles, not only near the solar surface but also deep in the convection zone. On the other hand, due to the wide range of dynamical scales of turbulence in the solar convection zone, both theory and simulation have had limited success. Thus, cutting-edge solar models and numerical simulations of the solar convection zone have focused more narrowly on a few key features, such as the time-averaged differential rotation. For example, Brun & Toomre (2002) report computational findings of differential rotation in an anelastic model for solar convection. A critical shortcoming in this model is that the viscous dissipation is based on an application of mixing length theory to stellar dynamics with some ad hoc parameter tuning. The goal of our work is to implement the subgrid scale model developed at CTR into the solar simulation code and examine how the differential rotation is affected as a result. Specifically, we implement a Smagorinsky-Lilly subgrid scale model into the ASH (anelastic spherical harmonic) code developed over the years by various authors. This paper is organized as follows. In §2 we briefly formulate the anelastic system that describes the solar convection. In §3 we formulate the Smagorinsky-Lilly subgrid scale model for unstably stratified convection. We then present some preliminary results in §4, where we also provide some conclusions and future directions.
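As a one-dimensional illustration of the Smagorinsky closure named above, the eddy viscosity is ν_t = (C_s Δ)² |S|, with the strain rate reduced here to a central-difference velocity gradient. The velocity field and constant are illustrative, not the ASH implementation:

```python
import math

# Sketch: Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S|,
# reduced to 1-D with |S| ~ |du/dx| from central differences.
# C_s and the velocity field are illustrative choices.

def eddy_viscosity(u, dx, cs=0.17):
    nu_t = []
    for i in range(1, len(u) - 1):
        strain = abs((u[i + 1] - u[i - 1]) / (2.0 * dx))  # |du/dx|
        nu_t.append((cs * dx) ** 2 * strain)
    return nu_t

if __name__ == "__main__":
    dx = 0.1
    u = [math.sin(2 * math.pi * i * dx) for i in range(11)]  # resolved velocity
    print([round(v, 5) for v in eddy_viscosity(u, dx)])
```

The closure dissipates energy fastest where resolved gradients are steepest, replacing the ad hoc mixing-length viscosity with one tied to the local flow.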

  8. Superluminal Labview Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheat, Robert; Marksteiner, Quinn; Quenzer, Jonathan

    2012-03-26

    This LabVIEW code is used to set the phases and amplitudes on the 72 antennas of the superluminal machine, and to map out the radiation pattern from the superluminal antenna. Each antenna radiates a modulated signal consisting of two separate frequencies, in the range of 2 GHz to 2.8 GHz. The phases and amplitudes from each antenna are controlled by a pair of AD8349 vector modulators (VMs). These VMs set the phase and amplitude of a high-frequency signal using a set of four DC inputs, which are controlled by Linear Technology LTC1990 digital-to-analog converters (DACs). The LabVIEW code controls these DACs through an 8051 microcontroller. This code also monitors the phases and amplitudes of the 72 channels. Near each antenna, there is a coupler that channels a portion of the power into a binary network. Through a LabVIEW-controlled switching array, any of the 72 coupled signals can be channeled into a Tektronix TDS 7404 digital oscilloscope. The LabVIEW code then takes an FFT of the signal and compares it to the FFT of a reference signal in the oscilloscope to determine the magnitude and phase of each sideband of the signal. The code compensates for phase and amplitude errors introduced by differences in cable lengths. The LabVIEW code sets each of the 72 elements to a user-determined phase and amplitude. For each element, the code runs an iterative procedure in which it adjusts the DACs until the correct phases and amplitudes have been reached.
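The magnitude-and-phase measurement described above can be mimicked in a few lines: take the DFT bin of one sideband in the channel signal and in the reference, and read off their ratio. A pure-Python sketch with illustrative sampling parameters, not the LabVIEW implementation:

```python
import cmath
import math

# Sketch: measuring one sideband's amplitude and phase relative to a
# reference via a single DFT bin. Pure-Python DFT; sample counts, bins,
# and tone parameters are illustrative.

def dft_bin(samples, k):
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(samples)) / n

def tone(freq_bin, phase_deg, amp, n=64):
    return [amp * math.cos(2 * math.pi * freq_bin * i / n
                           + math.radians(phase_deg)) for i in range(n)]

if __name__ == "__main__":
    sig = tone(5, 30.0, 2.0)   # channel under test
    ref = tone(5, 0.0, 1.0)    # reference channel
    s, r = dft_bin(sig, 5), dft_bin(ref, 5)
    print(round(abs(s) / abs(r), 3),                   # relative amplitude
          round(math.degrees(cmath.phase(s / r)), 1))  # phase offset, degrees
```

Dividing by the reference bin cancels common-path effects, which is the same idea behind compensating for cable-length differences.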

  9. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally with the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of trellis states.
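For concreteness, a rate-1/2, four-state convolutional encoder of the kind discussed can be sketched as below. The generator polynomials 7 and 5 (octal) are a common textbook choice, not necessarily the codes from the cited standards:

```python
# Sketch: rate-1/2 convolutional encoder with a 2-bit shift register
# (4 trellis states), generators 7 and 5 in octal. This small state space
# is what lets a Viterbi decoder search the trellis optimally.

def conv_encode(bits, g=(0b111, 0b101)):
    state = 0          # two most recent input bits
    out = []
    for b in bits:
        reg = (b << 2) | state          # [current, prev, prev-prev]
        for poly in g:
            out.append(bin(reg & poly).count("1") % 2)  # parity of tapped bits
        state = reg >> 1
    return out

if __name__ == "__main__":
    print(conv_encode([1, 0, 1, 1]))
```

Doubling the constraint length squares the number of states, so decoder complexity grows exponentially while the code's strength stays fixed per state count, matching the scaling limitation noted above.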

  10. Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burk, K.W.; Andrews, G.L.

    1989-02-01

    The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of quality assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.

  11. Energy Codes at a Glance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pamala C.; Richman, Eric E.

    2008-09-01

    Feeling dim from energy code confusion? Read on to give your inspections a charge. The U.S. Department of Energy's Building Energy Codes Program addresses hundreds of inquiries from the energy codes community every year. This article offers clarification for topics of confusion submitted to BECP Technical Support that are of interest to electrical inspectors, focusing on the residential and commercial energy code requirements based on the most recently published 2006 International Energy Conservation Code® and ANSI/ASHRAE/IESNA Standard 90.1-2004.

  12. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  13. Bring out your codes! Bring out your codes! (Increasing Software Visibility and Re-use)

    NASA Astrophysics Data System (ADS)

    Allen, A.; Berriman, B.; Brunner, R.; Burger, D.; DuPrie, K.; Hanisch, R. J.; Mann, R.; Mink, J.; Sandin, C.; Shortridge, K.; Teuben, P.

    2013-10-01

    Progress is being made in code discoverability and preservation, but as discussed at ADASS XXI, many codes still remain hidden from public view. With the Astrophysics Source Code Library (ASCL) now indexed by the SAO/NASA Astrophysics Data System (ADS), the introduction of a new journal, Astronomy & Computing, focused on astrophysics software, and the increasing success of education efforts such as Software Carpentry and SciCoder, the community has the opportunity to set a higher standard for its science by encouraging the release of software for examination and possible reuse. We assembled representatives of the community to present issues inhibiting code release and sought suggestions for tackling these factors. The session began with brief statements by panelists; the floor was then opened for discussion and ideas. Comments covered a diverse range of related topics and points of view, with apparent support for the propositions that algorithms should be readily available, code used to produce published scientific results should be made available, and there should be discovery mechanisms to allow these to be found easily. With increased use of resources such as GitHub (for code availability), ASCL (for code discovery), and a stated strong preference from the new journal Astronomy & Computing for code release, we expect to see additional progress over the next few years.

  14. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that largely determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current data point, and 2) flexibility, balancing the reconstruction of the current data point against locality. In this paper, we address the problem through a theoretical analysis of the two simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
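The two "simplest local coding schemes" named above both assign a data point soft weights over a set of anchors; Gaussian weights decay with squared distance, while a plain-distance (Laplacian-style) kernel decays more slowly for faraway anchors. A minimal sketch with illustrative anchors and bandwidth, not the paper's formulation:

```python
import math

# Sketch: soft coding weights over anchor points. "gaussian" decays with
# squared distance; "laplacian" with plain distance. Anchors, the query
# point, and sigma are illustrative placeholders.

def _dist(x, a):
    return math.sqrt(sum((xi - ai) ** 2 for xi, ai in zip(x, a)))

def coding_weights(x, anchors, sigma=1.0, kind="gaussian"):
    if kind == "gaussian":
        raw = [math.exp(-_dist(x, a) ** 2 / (2 * sigma ** 2)) for a in anchors]
    else:  # "laplacian"
        raw = [math.exp(-_dist(x, a) / sigma) for a in anchors]
    total = sum(raw)
    return [w / total for w in raw]   # normalized coding coefficients

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
    x = (0.9, 0.1)
    print([round(w, 3) for w in coding_weights(x, anchors, kind="gaussian")])
    print([round(w, 3) for w in coding_weights(x, anchors, kind="laplacian")])
```

Comparing the two outputs shows the locality/flexibility trade-off: the Gaussian kernel concentrates weight on the nearest anchor more aggressively than the plain-distance kernel.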

  15. Civil Code, 11 December 1987.

    PubMed

    1988-01-01

    Article 162 of this Mexican Code provides, among other things, that "Every person has the right freely, responsibly, and in an informed fashion to determine the number and spacing of his or her children." When a marriage is involved, this right is to be observed by the spouses "in agreement with each other." The civil codes of the following states contain the same provisions: 1) Baja California (Art. 159 of the Civil Code of 28 April 1972 as revised in Decree No. 167 of 31 January 1974); 2) Morelos (Art. 255 of the Civil Code of 26 September 1949 as revised in Decree No. 135 of 29 December 1981); 3) Queretaro (Art. 162 of the Civil Code of 29 December 1950 as revised in the Act of 9 January 1981); 4) San Luis Potosi (Art. 147 of the Civil Code of 24 March 1946 as revised in 13 June 1978); Sinaloa (Art. 162 of the Civil Code of 18 June 1940 as revised in Decree No. 28 of 14 October 1975); 5) Tamaulipas (Art. 146 of the Civil Code of 21 November 1960 as revised in Decree No. 20 of 30 April 1975); 6) Veracruz-Llave (Art. 98 of the Civil Code of 1 September 1932 as revised in the Act of 30 December 1975); and 7) Zacatecas (Art. 253 of the Civil Code of 9 February 1965 as revised in Decree No. 104 of 13 August 1975). The Civil Codes of Puebla and Tlaxcala provide for this right only in the context of marriage with the spouses in agreement. See Art. 317 of the Civil Code of Puebla of 15 April 1985 and Article 52 of the Civil Code of Tlaxcala of 31 August 1976 as revised in Decree No. 23 of 2 April 1984. The Family Code of Hidalgo requires as a formality of marriage a certification that the spouses are aware of methods of controlling fertility, responsible parenthood, and family planning. In addition, Article 22 the Civil Code of the Federal District provides that the legal capacity of natural persons is acquired at birth and lost at death; however, from the moment of conception the individual comes under the protection of the law, which is valid with respect to the

  16. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
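The core operation behind any cyclic-shift code family is generating each row of the 2D code matrix as a cyclic rotation of a base sequence, so every codeword keeps the same weight. A generic sketch; the base chip pattern and sizes are illustrative, not the paper's 2D-DCS construction rules:

```python
# Sketch: a 2D spectral/spatial code matrix built from cyclic shifts of a
# base spectral chip sequence. The base pattern and dimensions are
# illustrative, not the 2D-DCS construction.

def cyclic_shift(seq, k):
    k %= len(seq)
    return seq[-k:] + seq[:-k]

def code_matrix(base, rows):
    """Row r carries the base spectral code shifted by r chip positions."""
    return [cyclic_shift(base, r) for r in range(rows)]

if __name__ == "__main__":
    base = [1, 0, 0, 1, 0]   # spectral chip pattern, weight 2
    for row in code_matrix(base, 3):
        print(row)
```

Because shifting preserves the code weight and changes the chip positions deterministically, receivers can exploit the shift structure to cancel interference from other users.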

  17. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    PubMed

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  18. Long non-coding RNA and Polycomb: an intricate partnership in cancer biology.

    PubMed

    Achour, Cyrinne; Aguilo, Francesca

    2018-06-01

    High-throughput analyses have revealed that the vast majority of the transcriptome does not code for proteins. These non-translated transcripts, when larger than 200 nucleotides, are termed long non-coding RNAs (lncRNAs), and play fundamental roles in diverse cellular processes. LncRNAs are subject to dynamic chemical modification, adding another layer of complexity to our understanding of the potential roles that lncRNAs play in health and disease. Many lncRNAs regulate transcriptional programs by influencing the epigenetic state through direct interactions with chromatin-modifying proteins. Among these proteins, Polycomb repressive complexes 1 and 2 (PRC1 and PRC2) have been shown to be recruited by lncRNAs to silence target genes. Aberrant expression, deficiency or mutation of both lncRNA and Polycomb have been associated with numerous human diseases, including cancer. In this review, we have highlighted recent findings regarding the concerted mechanism of action of Polycomb group proteins (PcG), acting together with some classically defined lncRNAs, including X-inactive specific transcript (XIST), antisense non-coding RNA in the INK4 locus (ANRIL), metastasis associated lung adenocarcinoma transcript 1 (MALAT1), and HOX transcript antisense RNA (HOTAIR).

  19. Code Switching and Code Superimposition in Music. Working Papers in Sociolinguistics, No. 63.

    ERIC Educational Resources Information Center

    Slobin, Mark

    This paper illustrates how the sociolinguistic concept of code switching applies to the use of different styles of music. The two bases for the analogy are Labov's definition of code-switching as "moving from one consistent set of co-occurring rules to another," and the finding of sociolinguistics that code switching tends to be part of…

  20. Ethics and the Early Childhood Educator: Using the NAEYC Code. 2005 Code Edition

    ERIC Educational Resources Information Center

    Freeman, Nancy; Feeney, Stephanie

    2005-01-01

    With updated language and references to the 2005 revision of the Code of Ethical Conduct, this book, like the NAEYC Code of Ethical Conduct, seeks to inform, not prescribe, answers to tough questions that teachers face as they work with children, families, and colleagues. To help everyone become well acquainted with the Code and use it in one's…

  1. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the diminishing probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design and the ease of modifying the number of users are equally exceptional qualities of the code, in contrast to the Optical Orthogonal Codes (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  2. Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim

    2007-04-15

    X-ray buildup factors of lead in broad beam geometry for energies from 15 to 150 keV are determined using the general-purpose Monte Carlo N-Particle radiation transport computer code (MCNP4C). The obtained buildup factor data are fitted to a modified three-parameter Archer et al. model for ease in calculating the broad beam transmission by computer at any tube potential/filter combination in the diagnostic energy range. An example of their use to compute the broad beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad beam transmission is compared to data derived from the literature, showing good agreement. Therefore, the combination of the buildup factor data as determined and a mathematical model to generate x-ray spectra provides a computationally based solution to broad beam transmission for lead barriers in the shielding of x-ray facilities.
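
    The Archer et al. model referenced above expresses broad-beam transmission through a barrier of thickness x with three fitted parameters: T(x) = [(1 + β/α) e^(αγx) − β/α]^(−1/γ). A minimal sketch of evaluating this model (the parameter values below are illustrative placeholders, not the paper's fitted values for lead):

```python
import math

def archer_transmission(x_mm: float, alpha: float, beta: float, gamma: float) -> float:
    """Broad-beam transmission through a barrier of thickness x_mm (Archer model).

    T(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha] ** (-1/gamma)
    """
    r = beta / alpha
    return ((1 + r) * math.exp(alpha * gamma * x_mm) - r) ** (-1.0 / gamma)

# Hypothetical fitting parameters for illustration only (not the paper's values):
alpha, beta, gamma = 2.5, 15.0, 0.75   # alpha, beta in mm^-1; gamma dimensionless

for x in (0.0, 0.5, 1.0, 2.0):
    print(f"x = {x:.1f} mm  T = {archer_transmission(x, alpha, beta, gamma):.4e}")
```

    By construction T(0) = 1, and for large x the model decays like a pure exponential with coefficient α, which is why it fits broad-beam data well over several decades of attenuation.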

  3. Consequences of narrow cyclotron emission from Hercules X-1

    NASA Technical Reports Server (NTRS)

    Weaver, R. P.

    1978-01-01

    The implications of the recent observations of a narrow cyclotron line in the hard X-ray spectrum of Hercules X-1 are studied. A Monte Carlo code is used to simulate the X-ray transfer of an intrinsically narrow feature at approximately 56 keV through an opaque, cold magnetospheric shell. The results of this study indicate that if a narrow line can be emitted by the source region, then only about 10% of the photons remain in a narrow feature after scattering through the shell. The remaining photons are scattered into a broad feature (FWHM approximately 30 keV) that peaks near 20 keV. Thus, these calculations indicate that the intrinsic source luminosity of the cyclotron line is at least an order of magnitude greater than the observed luminosity.

  4. Top ten reasons to register your code with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Berriman, G. Bruce; Mink, Jessica D.; Nemiroff, Robert J.; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Teuben, Peter J.; Wallin, John F.; Warmels, Rein

    2017-01-01

    With 1,400 codes, the Astrophysics Source Code Library (ASCL, ascl.net) is the largest indexed resource in existence for codes used in astronomy research. This free online registry was established in 1999, is indexed by Web of Science and ADS, and is citable, with citations to its entries tracked by ADS. Registering your code with the ASCL is easy with our online submission system. Making your software available for examination shows confidence in your research and makes your research more transparent, reproducible, and falsifiable. ASCL registration allows your software to be cited on its own merits and provides a citation that is trackable and accepted by all astronomy journals, as well as by journals such as Science and Nature. Registration also allows others to find your code more easily. This presentation covers the benefits of registering astronomy research software with the ASCL.

  5. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  6. Genetic Code Analysis Toolkit: A novel tool to explore the coding properties of the genetic code and DNA sequences

    NASA Astrophysics Data System (ADS)

    Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.

    2018-01-01

    The genetic code is degenerate, and it is assumed that this redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under active research. This paper presents the Genetic Code Analysis Toolkit (GCAT), which provides workflows and algorithms for analyzing the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and other properties. GCAT comes with a feature-rich editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/

  7. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  8. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.

  9. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
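
    The claim that chrominance tolerates much coarser coding than luminance can be illustrated per pixel: transform RGB into a luminance/chrominance space, quantize the chrominance channels with very few bits, and invert the transform. A minimal sketch using the approximate NTSC YIQ matrices (the matrices, ranges, and bit allocations here are illustrative, not the paper's coding system):

```python
def rgb_to_yiq(r, g, b):
    """Approximate NTSC luminance/chrominance transform."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    """Approximate inverse of rgb_to_yiq."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

def quantize(v, lo, hi, bits):
    """Uniform scalar quantizer with 2**bits levels on [lo, hi]."""
    levels = (1 << bits) - 1
    idx = round((min(max(v, lo), hi) - lo) / (hi - lo) * levels)
    return lo + idx * (hi - lo) / levels

# Keep luminance fine (8 bits) but chrominance coarse (3 bits each):
pixel = (0.8, 0.4, 0.2)
y, i, q = rgb_to_yiq(*pixel)
y_h = quantize(y, 0.0, 1.0, 8)
i_h = quantize(i, -0.6, 0.6, 3)
q_h = quantize(q, -0.53, 0.53, 3)
rec = yiq_to_rgb(y_h, i_h, q_h)
err = max(abs(a - b) for a, b in zip(pixel, rec))
print(f"reconstruction error: {err:.3f}")
```

    Even with only 3 bits per chrominance sample the reconstruction stays close to the original, whereas quantizing luminance that coarsely would produce visible banding; this asymmetry is what lets the chrominance planes be coded at around 1 bit per element.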

  10. Coding OSICS sports injury diagnoses in epidemiological studies: does the background of the coder matter?

    PubMed

    Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C

    2014-04-01

    To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and a non-clinical coder. Assessment of intercoder agreement. Community Australian football. 1082 standardised injury surveillance records. Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist). Adjudication by a third coder (a biomechanist). The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-character OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred at the third character; 85% arose because one coder assigned a non-specific 'X' code. The sports physician's code was deemed correct in 53% of cases and the epidemiologist's in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Sports injury research requires accurate identification and classification of specific injuries, and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of disagreements occurred at the third OSICS character highlights that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report forms need to reflect this level of complexity, and data management teams need to include a broad range of expertise.
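
    The hierarchical agreement figures above (agreement on one character, on two, on all four) can be computed by comparing code prefixes record by record. A minimal sketch with hypothetical OSICS-10-style codes (illustrative values, not the study's data):

```python
def prefix_agreement(codes_a, codes_b, depth):
    """Fraction of records whose codes agree on the first `depth` characters."""
    assert len(codes_a) == len(codes_b)
    hits = sum(a[:depth] == b[:depth] for a, b in zip(codes_a, codes_b))
    return hits / len(codes_a)

# Hypothetical four-character codes from two coders (illustrative only):
physician      = ["KJXX", "TMXX", "ASXX", "GMXX", "KJRX"]
epidemiologist = ["KJRX", "TMXX", "AXXX", "GMXX", "KJRX"]

for d in (1, 2, 4):
    print(f"first {d} char(s): {prefix_agreement(physician, epidemiologist, d):.0%}")
```

    As in the study, agreement is high at the top of the hierarchy and falls off as more specific characters are compared.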

  11. Codes and morals: is there a missing link? (The Nuremberg Code revisited).

    PubMed

    Hick, C

    1998-01-01

    Codes are a well known and popular but weak form of ethical regulation in medical practice. There is, however, a lack of research on the relations between moral judgments and ethical Codes, or on the possibility of morally justifying these Codes. Our analysis begins by showing, given the Nuremberg Code, how a typical reference to natural law has historically served as moral justification. We then indicate, following the analyses of H. T. Engelhardt, Jr., and A. MacIntyre, why such general moral justifications of codes must necessarily fail in a society of "moral strangers." Going beyond Engelhardt we argue, that after the genealogical suspicion in morals raised by Nietzsche, not even Engelhardt's "principle of permission" can be rationally justified in a strong sense--a problem of transcendental argumentation in morals already realized by I. Kant. Therefore, we propose to abandon the project of providing general justifications for moral judgements and to replace it with a hermeneutical analysis of ethical meanings in real-world situations, starting with the archetypal ethical situation, the encounter with the Other (E. Levinas).

  12. The atmospheric structures of the companion stars of eclipsing binary x ray sources

    NASA Technical Reports Server (NTRS)

    Clark, George W.

    1992-01-01

    This investigation was aimed at determining structural features of the atmospheres of the massive early-type companion stars of eclipsing x-ray pulsars by measuring the attenuation of the x-ray spectrum during eclipse transitions and in deep eclipse. Several extended visits were made to ISAS in Japan by G. Clark and his graduate student, Jonathan Woo, to coordinate the Ginga observations and preliminary data reduction, and to work with the Japanese host scientist, Fumiaki Nagase, on the interpretation of the data. At MIT, extensive developments were made in software systems for data interpretation. In particular, a Monte Carlo code was developed for a 3-D simulation of the propagation of x-rays from the neutron star through the ionized atmosphere of the companion. With this code it was possible to determine the spectrum of Compton-scattered x-rays in deep eclipse and to subtract that component from the observed spectra, thereby isolating the soft component that is attributable in large measure to x-rays that have been scattered by interstellar grains. This research culminated in the submission of a paper to the Astrophysical Journal on the determination of properties of the atmosphere of QV Nor, the B0 I companion of 4U 1538-52, and the properties of interstellar dust grains along the line of sight to the source. The latter results were an unanticipated byproduct of the investigation. Data from Ginga observations of the Magellanic binaries SMC X-1 and LMC X-4 are currently under investigation as the PhD thesis project of Jonathan Woo, who anticipates completion in the spring of 1993.

  13. The neutral emergence of error minimized genetic codes superior to the standard genetic code.

    PubMed

    Massey, Steven E

    2016-11-07

    The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how the search of alternative codes for an error-minimized code can occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM has arisen via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if, during code expansion, the most similar amino acid to the parent amino acid, out of the set of unassigned amino acids, is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM could have arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life. Copyright © 2016 Elsevier Ltd. All rights reserved.
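
    Error minimization can be quantified as the mean squared change in some amino acid property over all single-point mutations. A toy sketch with two-letter "codons" and hypothetical numeric property values (not the SGC or any real property scale): an assignment in which codons sharing a first base share a value scores much lower than it would if values were unrelated to codon structure, and a constant assignment scores zero.

```python
from itertools import product

BASES = "ACGT"

def em_score(code):
    """Mean squared property change over all single-point mutations.

    `code` maps each two-letter codon over ACGT to a numeric property value.
    Lower scores mean the assignment is better error-minimized.
    """
    total, count = 0.0, 0
    for codon in code:
        for pos in range(len(codon)):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                total += (code[codon] - code[mutant]) ** 2
                count += 1
    return total / count

# Toy 16-codon assignments (hypothetical property values, illustration only):
clustered = {b1 + b2: BASES.index(b1) for b1, b2 in product(BASES, BASES)}
constant  = {b1 + b2: 0 for b1, b2 in product(BASES, BASES)}

print(em_score(clustered))  # second-position mutations are silent here
print(em_score(constant))   # a constant assignment is trivially error-free
```

    The paper's point is that code expansion by gene duplication naturally produces the "clustered" kind of assignment, so low EM scores can emerge without direct selection among whole codes.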

  14. NR-code: Nonlinear reconstruction code

    NASA Astrophysics Data System (ADS)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  15. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data are transmitted through a noisy channel, errors are produced within the data, rendering them indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required at the receiver for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
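
    The innermost (n, n-16) CRC mentioned above belongs to the CRC-16-CCITT family. A minimal bitwise sketch of that family (polynomial 0x1021, initial value 0xFFFF, no reflection; the exact CCSDS parameters should be checked against the applicable recommendation):

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF, poly: int = 0x1021) -> int:
    """Bitwise CRC-16 (CCITT-FALSE variant: poly 0x1021, init 0xFFFF, no reflection)."""
    crc = init
    for byte in data:
        crc ^= byte << 8                 # fold the next byte into the high register byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF                # keep the register 16 bits wide
    return crc

frame = b"CCSDS transfer frame payload"
print(f"CRC-16 = 0x{crc16_ccitt(frame):04X}")
```

    The 16 parity bits appended to the n-16 information bits are exactly this register value; the receiver recomputes it and flags any mismatch as a detected error.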

  16. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.

  17. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2 exp 8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
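
    The interleaver described above can be sketched as a depth-4 block interleaver: symbols from four RS codewords are transmitted column-wise, so a channel burst is spread across codewords instead of landing in one. A minimal sketch (toy 8-symbol codewords stand in for real 255-symbol RS codewords):

```python
def interleave(codewords):
    """Transmit symbols column-wise from `depth` codewords of equal length."""
    depth, n = len(codewords), len(codewords[0])
    return [codewords[i][j] for j in range(n) for i in range(depth)]

def deinterleave(stream, depth):
    """Reassemble the original codewords from the interleaved symbol stream."""
    n = len(stream) // depth
    cws = [[0] * n for _ in range(depth)]
    for k, sym in enumerate(stream):
        cws[k % depth][k // depth] = sym
    return cws

# Four toy 8-symbol "codewords" (a real RS(255,223) codeword has 255 symbols):
cws = [[(i, j) for j in range(8)] for i in range(4)]
stream = interleave(cws)

# A channel burst hitting 4 consecutive symbols touches each codeword only once:
for k in range(8, 12):
    stream[k] = "ERR"
received = deinterleave(stream, depth=4)
print([row.count("ERR") for row in received])  # → [1, 1, 1, 1]
```

    With depth I, a burst of up to I symbols costs each RS codeword at most one symbol of its t = 16 correction budget, which is why the tolerable burst length grows in proportion to the interleave depth.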

  18. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter's frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method. The restored image exhibits better subjective quality and superior objective evaluation values.
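
    The spirit of the code search can be sketched with a simplified criterion: among binary codes of fixed length and number of ones, prefer the one whose DFT magnitude has the largest minimum, since spectral nulls make deconvolution ill-conditioned. This sketch uses exhaustive search rather than the paper's genetic algorithm, and omits its affine noise model; the length and duty cycle are illustrative only.

```python
from itertools import combinations
import cmath

def min_dft_magnitude(code):
    """Smallest DFT magnitude of a binary exposure code (larger is better:
    a spectrum without near-zeros keeps deconvolution well conditioned)."""
    n = len(code)
    return min(
        abs(sum(code[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
        for k in range(n)
    )

n, ones = 12, 6
best = max(
    (tuple(1 if t in idx else 0 for t in range(n))
     for idx in combinations(range(n), ones)),
    key=min_dft_magnitude,
)
box = (1,) * ones + (0,) * (n - ones)   # a conventional open-then-closed shutter
print("best code:", best)
print(f"min |DFT|: box {min_dft_magnitude(box):.3f} vs best {min_dft_magnitude(best):.3f}")
```

    The box filter has exact spectral nulls, which is precisely why traditional motion blur is hard to invert; the searched code keeps all frequency bins well away from zero.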

  19. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2(exp m) - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2 exp m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.

  20. The Proteus Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Bui, Trong T.; Cavicchi, Richard H.; Conley, Julianne M.; Molls, Frank B.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. The emphasis in the development of Proteus is not algorithm development or research on numerical methods, but rather the development of the code itself. The objective is to develop codes that are user-oriented, easily-modified, and well-documented. Well-proven, state-of-the-art solution algorithms are being used. Code readability, documentation (both internal and external), and validation are being emphasized. This paper is a status report on the Proteus development effort. The analysis and solution procedure are described briefly, and the various features in the code are summarized. The results from some of the validation cases that have been run are presented for both the two- and three-dimensional codes.

  1. X-ray optical simulations supporting advanced commissioning of the coherent hard x-ray beamline at NSLS-II

    NASA Astrophysics Data System (ADS)

    Wiegart, L.; Rakitin, M.; Fluerasu, A.; Chubar, O.

    2017-08-01

    We present the application of fully- and partially-coherent synchrotron radiation wavefront propagation simulation functions, implemented in the "Synchrotron Radiation Workshop" computer code, to create a `virtual beamline' mimicking the Coherent Hard X-ray scattering beamline at NSLS-II. The beamline simulation includes all optical beamline components, such as the insertion device, mirror with metrology data, slits, double crystal monochromator and refractive focusing elements (compound refractive lenses and kinoform lenses). A feature of this beamline is the exploitation of X-ray beam coherence, boosted by the low-emittance NSLS-II storage ring, for techniques such as X-ray Photon Correlation Spectroscopy or Coherent Diffraction Imaging. The key performance parameters are the degree of X-ray beam coherence and the photon flux, and the trade-off between them needs to guide the beamline settings for specific experimental requirements. Simulations of key performance parameters are compared to measurements obtained during beamline commissioning, including the spectral flux of the undulator source, the degree of transverse coherence, and focal spot sizes.

  2. Structural aspects of the inactive X chromosome.

    PubMed

    Bonora, Giancarlo; Disteche, Christine M

    2017-11-05

    A striking difference between male and female nuclei was recognized early on by the presence of a condensed chromatin body only in female cells. Mary Lyon proposed that X inactivation or silencing of one X chromosome at random in females caused this structural difference. Subsequent studies have shown that the inactive X chromosome (Xi) does indeed have a very distinctive structure compared to its active counterpart and all autosomes in female mammals. In this review, we will recap the discovery of this fascinating biological phenomenon and seminal studies in the field. We will summarize imaging studies using traditional microscopy and super-resolution technology, which revealed uneven compaction of the Xi. We will then discuss recent findings based on high-throughput sequencing techniques, which uncovered the distinct three-dimensional bipartite configuration of the Xi and the role of specific long non-coding RNAs in eliciting and maintaining this structure. The relative position of specific genomic elements, including genes that escape X inactivation, repeat elements and chromatin features, will be reviewed. Finally, we will discuss the position of the Xi, either near the nuclear periphery or the nucleolus, and the elements implicated in this positioning. This article is part of the themed issue 'X-chromosome inactivation: a tribute to Mary Lyon'. © 2017 The Authors.

  3. Serial-data correlator/code translator

    NASA Technical Reports Server (NTRS)

    Morgan, L. E.

    1977-01-01

    System, consisting of sampling flip flop, memory (either RAM or ROM), and memory buffer, correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.

  4. Circular codes revisited: a statistical approach.

    PubMed

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996, Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and undergone rapid development. In this paper we discuss theoretical issues related to the synchronization properties of coding sequences and circular codes, with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to address several questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability, but, still, there exists great variability among sequences. Second, we focus on this code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored so as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes to reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
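
    Comma-freeness, the strictest of the synchronization properties discussed above (stronger than circularity), can be checked by brute force: a trinucleotide code X is comma-free if no word of X appears at an out-of-frame position in any concatenation of two words of X. A minimal sketch:

```python
def is_comma_free(code):
    """A trinucleotide code X is comma-free if no word of X appears at a
    shifted (out-of-frame) position in any concatenation of two words of X."""
    for u in code:
        for v in code:
            uv = u + v
            for offset in (1, 2):
                if uv[offset:offset + 3] in code:
                    return False
    return True

print(is_comma_free({"AAT"}))          # True: ATA and TAA never re-enter the code
print(is_comma_free({"AAA"}))          # False: AAA reads the same in every frame
print(is_comma_free({"ACG", "CGA"}))   # False: ACG|ACG contains CGA out of frame
```

    A comma-free code allows the reading frame to be recovered immediately; circular codes relax this, guaranteeing frame retrieval only after a bounded window of nucleotides.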

  5. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  6. Experimental characterization of an ultra-fast Thomson scattering x-ray source with three-dimensional time and frequency-domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuba, J; Slaughter, D R; Fittinghoff, D N

    We present a detailed comparison of the measured characteristics of Thomson backscattered x-rays produced at the PLEIADES (Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures) facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time- and space-dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase space of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x-rays produced from the interaction are presented and shown to agree well with the simulations.

  7. Continuous Codes and Standards Improvement (CCSI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivkin, Carl H; Burgess, Robert M; Buttner, William J

    2015-10-21

    As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing codes solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because as technology and the state of safety knowledge develops there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.

  8. Rolf Mewe: a career devoted to X-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Kaastra, Jelle S.; Mewe, Rolf

    2005-06-01

    An overview of the life and work of Rolf Mewe (1935-2004) as an X-ray spectroscopist is given. He was one of the pioneers in the field of X-ray spectroscopy. His work illustrates nicely how this field developed from the early days up to the present high-resolution era. The plasma emission codes developed by him and his collaborators over several decades are among the most widely used. His thorough knowledge of the field, as well as his ability and enthusiasm to cooperate with many colleagues, made his career a success. He will be missed by all of us for his work and personality.

  9. Evidence for Phex haploinsufficiency in murine X-linked hypophosphatemia.

    PubMed

    Wang, L; Du, L; Ecarot, B

    1999-04-01

    Mutations in the PHEX gene (phosphate-regulating gene with homology to endopeptidases on the X-chromosome) are responsible for X-linked hypophosphatemia (HYP). We previously reported the full-length coding sequence of murine Phex cDNA and provided evidence of Phex expression in bone and tooth. Here, we report the cloning of the entire 3.5-kb 3'UTR of the Phex gene, yielding a total of 6248 bp for the Phex transcript. Southern blot and RT-PCR analyses revealed that the 3' end of the coding sequence and the 3'UTR of the Phex gene, spanning exons 16 to 22, are deleted in Hyp, the mouse model for HYP. Northern blot analysis of bone revealed lack of expression of stable Phex mRNA from the mutant allele and expression of Phex transcripts from the wild-type allele in Hyp heterozygous females. Expression of the Phex protein in heterozygotes was confirmed by Western analysis with antibodies raised against a COOH-terminal peptide of the mouse Phex protein. Taken together, these results indicate that the dominant pattern of Hyp inheritance in mice is due to Phex haploinsufficiency.

  10. Kinetic Modeling of Ultraintense X-Ray Laser-Matter Interactions

    NASA Astrophysics Data System (ADS)

    Royle, Ryan; Sentoku, Yasuhiko; Mancini, Roberto; Johzaki, Tomoyuki

    2015-11-01

    High-intensity XFELs have become a novel way of creating and studying hot dense plasmas. The LCLS at Stanford can deliver a millijoule of energy with more than 10^12 photons in a ~ 100 femtosecond pulse. By tightly focusing the beam to a micron-scale spot size, the XFEL can be intensified to more than 10^18 W/cm^2, making it possible to heat solid matter isochorically beyond a million degrees (>100 eV). Such extreme states of matter are of considerable interest due to their relevance to astrophysical plasmas. Additionally, they will allow novel ways of studying equation-of-state and opacity physics under Gbar pressures and strong fields. Photoionization is the dominant x-ray absorption mechanism and triggers the heating processes. A photoionization model that takes into account the subshell cross-sections has been developed in a kinetic plasma simulation code, PICLS, which solves the x-ray transport self-consistently. The XFEL-matter interaction with several elements, including solid carbon, aluminum, and iron, is studied with the code, and the results are compared with recent LCLS experiments. This work was supported by the DOE/OFES under Contract No. DE-SC0008827.
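
    The quoted focused intensity follows from simple arithmetic on the stated pulse parameters; the spot diameter below is an assumed illustrative value.

```python
# Order-of-magnitude check: a millijoule in a ~100 fs pulse focused to a
# micron-scale spot exceeds 1e18 W/cm^2.
import math

energy_J = 1e-3          # ~1 mJ pulse energy
duration_s = 100e-15     # ~100 fs pulse
spot_diameter_cm = 1e-4  # assumed 1 micron focal spot

power_W = energy_J / duration_s
area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
intensity = power_W / area_cm2
print(f"I ~ {intensity:.1e} W/cm^2")
```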

  11. Line x-ray source for diffraction enhanced imaging in clinical and industrial applications

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqin

    Mammography is an imaging modality that uses a low-dose x-ray or other radiation source for examination of the breasts. It plays a central role in early detection of breast cancers. The material similarity of tumor cells and healthy cells, breast implant surgery, and other factors make breast cancers hard to visualize and detect. Diffraction enhanced imaging (DEI), first proposed and investigated by D. Chapman, is a new x-ray radiographic imaging modality using monochromatic x-rays from a synchrotron source, which produces images of thick absorbing objects that are almost completely free of scatter. It shows dramatically improved contrast over standard imaging when applied to the same phantom. The contrast is based not only on attenuation but also on the refraction and diffraction properties of the sample. This imaging method may improve image quality in mammography, other medical applications, industrial radiography for non-destructive testing, and x-ray computed tomography. However, the size and cost of a synchrotron source limit the applicability of the new modality at clinical levels. This research investigates the feasibility of a designed line x-ray source to produce intensity comparable to synchrotron sources. It is composed of a 2-cm-long tungsten filament, installed on a carbon steel filament cup (backing plate), as the cathode, and a stationary oxygen-free copper anode with a molybdenum coating on the front surface as the target. Characteristic properties of the line x-ray source were computationally studied and the prototype was experimentally investigated. The SIMION code was used to computationally study the electron trajectories emanating from the filament towards the molybdenum target. A Faraday cup on the prototype, proof-of-principle device was used to measure the distribution of electrons on the target, which compares favorably to computational results. The intensities of characteristic x-ray for molybdenum

  12. QR code for medical information uses.

    PubMed

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
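
    One way such authentication could work (a hypothetical sketch, not the authors' implementation): embed a keyed digest alongside the payload before encoding it into the QR symbol, so a scanner can verify the label was not altered. The key and payload are made up.

```python
# Hypothetical QR-label authentication: payload plus truncated HMAC tag.
import hmac, hashlib

SECRET_KEY = b"clinic-demo-key"   # hypothetical shared secret

def make_label(payload: str) -> str:
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{payload}|{tag}"     # string to be encoded into the QR symbol

def verify_label(label: str) -> bool:
    payload, _, tag = label.rpartition("|")
    expect = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expect)

label = make_label("RX:12345;drug=amoxicillin;dose=500mg")
print(verify_label(label))                                      # genuine label
print(verify_label(label.replace("dose=500mg", "dose=900mg")))  # tampered label
```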

  13. Lean coding machine. Facilities target productivity and job satisfaction with coding automation.

    PubMed

    Rollins, Genna

    2010-07-01

    Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.

  14. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
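
    The Golomb-Rice stage can be sketched in software as follows (coding principle only; the chip implements it in mixed-signal column-level hardware, and the signed-residual mapping shown is an assumption):

```python
# Golomb-Rice coding of prediction residuals: map signed residuals to
# non-negative integers, then emit unary quotient + k-bit remainder.

def zigzag(r):                 # signed residual -> non-negative integer
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")  # unary(q) + binary(r)

residuals = [0, -1, 2, 5, -3]
bits = "".join(rice_encode(zigzag(r), k=2) for r in residuals)
print(bits)  # small residuals get short codes
```

    Small residuals, which dominate after decorrelation, receive the shortest codewords; this is what makes the scheme attractive for hardware.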

  15. Development of the X-33 Aerodynamic Uncertainty Model

    NASA Technical Reports Server (NTRS)

    Cobleigh, Brent R.

    1998-01-01

    An aerodynamic uncertainty model for the X-33 single-stage-to-orbit demonstrator aircraft has been developed at NASA Dryden Flight Research Center. The model is based on comparisons of historical flight test estimates to preflight wind-tunnel and analysis code predictions of vehicle aerodynamics documented during six lifting-body aircraft and the Space Shuttle Orbiter flight programs. The lifting-body and Orbiter data were used to define an appropriate uncertainty magnitude in the subsonic and supersonic flight regions, and the Orbiter data were used to extend the database to hypersonic Mach numbers. The uncertainty data consist of increments or percentage variations in the important aerodynamic coefficients and derivatives as a function of Mach number along a nominal trajectory. The uncertainty models will be used to perform linear analysis of the X-33 flight control system and Monte Carlo mission simulation studies. Because the X-33 aerodynamic uncertainty model was developed exclusively using historical data rather than X-33 specific characteristics, the model may be useful for other lifting-body studies.

  16. Energetic Particle Loss Estimates in W7-X

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Micheal; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team

    2017-10-01

    The collisionless loss of high-energy H+ and D+ ions in the W7-X device is examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the particle distribution. These simulations are compared against slowing-down simulations in which electron impact, ion impact, and pitch-angle scattering are considered. Full-device simulations allow tracing of particle trajectories to the first-wall components. These simulations provide estimates for placement of a novel set of energetic particle detectors. Recent performance upgrades allow the code to run on more than 1000 processors, providing high-fidelity simulations. Speedup and future work are discussed. DE-AC02-09CH11466.

  17. An MCNP-based model of a medical linear accelerator x-ray photon beam.

    PubMed

    Ajaj, F A; Ghassal, N M

    2003-09-01

    The major components in the x-ray photon beam path of the treatment head of the VARIAN Clinac 2300 EX medical linear accelerator were modeled and simulated using the Monte Carlo N-Particle radiation transport computer code (MCNP). Simulated components include x-ray target, primary conical collimator, x-ray beam flattening filter and secondary collimators. X-ray photon energy spectra and angular distributions were calculated using the model. The x-ray beam emerging from the secondary collimators was scored by considering the total x-ray spectra from the target as the source of x-rays at the target position. The depth dose distribution and dose profiles at different depths and field sizes have been calculated at a nominal operating potential of 6 MV and found to be within acceptable limits. It is concluded that accurate specification of the component dimensions, composition and nominal accelerating potential gives a good assessment of the x-ray energy spectra.

  18. X-ray radiative transfer in protoplanetary disks. The role of dust and X-ray background fields

    NASA Astrophysics Data System (ADS)

    Rab, Ch.; Güdel, M.; Woitke, P.; Kamp, I.; Thi, W.-F.; Min, M.; Aresu, G.; Meijerink, R.

    2018-01-01

    Context. The X-ray luminosities of T Tauri stars are about two to four orders of magnitude higher than the luminosity of the contemporary Sun. As these stars are born in clusters, their disks are not only irradiated by their parent star but also by an X-ray background field produced by the cluster members. Aims: We aim to quantify the impact of X-ray background fields produced by young embedded clusters on the chemical structure of disks. Further, we want to investigate the importance of the dust for X-ray radiative transfer in disks. Methods: We present a new X-ray radiative transfer module for the radiation thermo-chemical disk code PRODIMO (PROtoplanetary DIsk MOdel), which includes X-ray scattering and absorption by both the gas and dust component. The X-ray dust opacities can be calculated for various dust compositions and dust-size distributions. For the X-ray radiative transfer we consider irradiation by the star and by X-ray background fields. To study the impact of X-rays on the chemical structure of disks we use the well established disk ionization tracers N2H+ and HCO+. Results: For evolved dust populations (e.g. grain growth), X-ray opacities are mostly dominated by the gas; only for photon energies E ≳ 5-10 keV do dust opacities become relevant. Consequently the local disk X-ray radiation field is only affected in dense regions close to the disk midplane. X-ray background fields can dominate the local X-ray disk ionization rate for disk radii r ≳ 20 au. However, the N2H+ and HCO+ column densities are only significantly affected in cases of low cosmic-ray ionization rates (≲10^-19 s^-1), or if the background flux is at least a factor of ten higher than the flux level of ≈10^-5 erg cm^-2 s^-1 expected for clusters typical of the solar vicinity. Conclusions: Observable signatures of X-ray background fields in low-mass star-formation regions, like Taurus, are only expected for cluster members experiencing a strong X-ray background field (e.g. due to

  19. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding unit and transform unit. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with linear prediction model improves intra prediction and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by adaptive multiple transform technique. Finally, in addition to deblocking filter and SAO, adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.
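
    The recursive quadtree partitioning mentioned above can be illustrated with a toy splitter (using pixel variance as a stand-in for the rate-distortion cost a real HEVC encoder would use):

```python
# Toy quadtree block partitioning: split a square block while its pixel
# variance exceeds a threshold; flat blocks stay as one large coding unit.

def variance(block):
    vals = [v for row in block for v in row]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def partition(block, x=0, y=0, thresh=10.0, min_size=2):
    n = len(block)
    if n <= min_size or variance(block) <= thresh:
        return [(x, y, n)]                       # leaf coding unit
    h = n // 2
    tl = [row[:h] for row in block[:h]]
    tr = [row[h:] for row in block[:h]]
    bl = [row[:h] for row in block[h:]]
    br = [row[h:] for row in block[h:]]
    return (partition(tl, x, y, thresh, min_size) +
            partition(tr, x + h, y, thresh, min_size) +
            partition(bl, x, y + h, thresh, min_size) +
            partition(br, x + h, y + h, thresh, min_size))

flat = [[5] * 4 for _ in range(4)]   # smooth region: one big CU
busy = [[0, 99, 0, 99], [99, 0, 99, 0], [0, 99, 0, 99], [99, 0, 99, 0]]
print(partition(flat))               # [(0, 0, 4)]
print(len(partition(busy)))          # 4 leaves of size 2
```

    Extending the maximum block size, as the paper does, simply lets smooth regions be represented by even larger leaves.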

  20. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing a per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  1. Aeroheating Predictions for X-34 Using an Inviscid-Boundary Layer Method

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Kleb, William L.; Alter, Steven J.

    1998-01-01

    Radiative equilibrium surface temperatures and surface heating rates from a combined inviscid-boundary layer method are presented for the X-34 Reusable Launch Vehicle for several points along the hypersonic descent portion of its trajectory. Inviscid, perfect-gas solutions are generated with the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and the Data-Parallel Lower-Upper Relaxation (DPLUR) code. Surface temperatures and heating rates are then computed using the Langley Approximate Three-Dimensional Convective Heating (LATCH) engineering code employing both laminar and turbulent flow models. The combined inviscid-boundary layer method provides accurate predictions of surface temperatures over most of the vehicle and requires much less computational effort than a Navier-Stokes code. This enables the generation of a more thorough aerothermal database which is necessary to design the thermal protection system and specify the vehicle's flight limits.
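
    Radiative equilibrium ties the local heating rate q to the surface temperature through q = eps * sigma * T^4; a quick sketch with assumed numbers (not X-34 data):

```python
# Radiative equilibrium wall temperature: T = (q / (eps * sigma))**0.25.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
eps = 0.8                 # assumed surface emissivity
q = 1.0e5                 # assumed convective heating rate, W/m^2 (10 W/cm^2)

T = (q / (eps * SIGMA)) ** 0.25
print(f"T_wall ~ {T:.0f} K")
```

    Inverting the relation this way is what lets an engineering code like LATCH report surface temperatures directly from computed heating rates.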

  2. Initial performances of first undulator-based hard x-ray beamlines of NSLS-II compared to simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubar, Oleg, E-mail: chubar@bnl.gov; Chu, Yong S.; Huang, Xiaojing

    2016-07-27

    Commissioning of the first X-ray beamlines of NSLS-II included detailed measurements of spectral and spatial distributions of the radiation at different locations of the beamlines, from front-ends to sample positions. Comparison of some of these measurement results with high-accuracy calculations of synchrotron (undulator) emission and wavefront propagation through X-ray transport optics, performed using SRW code, is presented.

  3. NPTFit: A Code Package for Non-Poissonian Template Fitting

    NASA Astrophysics Data System (ADS)

    Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R.

    2017-06-01

    We present NPTFit, an open-source code package, written in Python and Cython, for performing non-Poissonian template fits (NPTFs). The NPTF is a recently developed statistical procedure for characterizing the contribution of unresolved point sources (PSs) to astrophysical data sets. The NPTF was first applied to Fermi gamma-ray data to provide evidence that the excess of ˜GeV gamma-rays observed in the inner regions of the Milky Way likely arises from a population of sub-threshold point sources, and the NPTF has since found additional applications studying sub-threshold extragalactic sources at high Galactic latitudes. The NPTF generalizes traditional astrophysical template fits to allow for the ability to search for populations of unresolved PSs that may follow a given spatial distribution. NPTFit builds upon the framework of the fluctuation analyses developed in X-ray astronomy, thus it likely has applications beyond those demonstrated with gamma-ray data. The NPTFit package utilizes novel computational methods to perform the NPTF efficiently. The code is available at http://github.com/bsafdi/NPTFit and up-to-date and extensive documentation may be found at http://nptfit.readthedocs.io.

  4. NPTFit: A Code Package for Non-Poissonian Template Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R., E-mail: smsharma@princeton.edu, E-mail: nrodd@mit.edu, E-mail: bsafdi@mit.edu

    We present NPTFit, an open-source code package, written in Python and Cython, for performing non-Poissonian template fits (NPTFs). The NPTF is a recently developed statistical procedure for characterizing the contribution of unresolved point sources (PSs) to astrophysical data sets. The NPTF was first applied to Fermi gamma-ray data to provide evidence that the excess of ∼GeV gamma-rays observed in the inner regions of the Milky Way likely arises from a population of sub-threshold point sources, and the NPTF has since found additional applications studying sub-threshold extragalactic sources at high Galactic latitudes. The NPTF generalizes traditional astrophysical template fits to allow searches for populations of unresolved PSs that may follow a given spatial distribution. NPTFit builds upon the framework of the fluctuation analyses developed in X-ray astronomy, thus it likely has applications beyond those demonstrated with gamma-ray data. The NPTFit package utilizes novel computational methods to perform the NPTF efficiently. The code is available at http://github.com/bsafdi/NPTFit and up-to-date and extensive documentation may be found at http://nptfit.readthedocs.io.

  5. Verification of unfold error estimates in the UFO code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam-driven hohlraums.
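
    The Monte Carlo error estimate can be sketched in miniature: perturb the data with Gaussian deviates at the stated 5% imprecision, repeat the unfold for each of 100 sets, and take the spread of the results as the unfold uncertainty. A trivial scalar "unfold" with a made-up response value stands in for UFO's full matrix problem.

```python
# Monte Carlo estimate of unfold uncertainty via repeated noisy unfolds.
import random, statistics
random.seed(1)

response = 2.5                # hypothetical scalar response function
true_source = 10.0
clean_datum = response * true_source

unfolds = []
for _ in range(100):          # 100 random data sets, as in the study
    noisy = clean_datum * (1 + random.gauss(0, 0.05))  # 5% Gaussian deviates
    unfolds.append(noisy / response)                   # scalar "unfold"

mean = statistics.mean(unfolds)
rel_err = statistics.stdev(unfolds) / mean
print(f"unfolded {mean:.2f} +/- {100 * rel_err:.1f}%")
```

    In the linear scalar case the recovered spread simply reflects the 5% input imprecision; the point of the technique is that it still works when the unfold is nonlinear or underdetermined.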

  6. A multidisciplinary approach to vascular surgery procedure coding improves coding accuracy, work relative value unit assignment, and reimbursement.

    PubMed

    Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres

    2016-08-01

    Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. This increase, however, was due to errors by institution coders, with six inappropriately used codes

  7. Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields

    NASA Astrophysics Data System (ADS)

    Motazedi, Mohammad Reza; Dianat, Reza

    2017-02-01

    In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65,537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well-matched to the length of OFDM frames. Over these fields, the block lengths of codes are powers of two and we fully exploit the radix-2 fast Fourier transform algorithms. Multiplications and additions are simple modulus operations. These codes provide desirable randomness with a small perturbation in information symbols that is essential for generation of different statistically independent candidates. Our simulations show that the PAPR reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but contrary to SLM, we can get error correction capability. Also for the second proposed technique, the transmission of side information is not needed. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
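
    The arithmetic convenience of GF(257) can be verified directly: since 257 is prime and 257 - 1 = 2^8, a radix-2 length-256 transform exists with all operations reduced mod 257. A naive O(n^2) version is used here for clarity; a real codec would use the fast radix-2 form the paper exploits.

```python
# Length-256 number-theoretic transform over GF(257); 3 is a primitive
# root mod 257, so a 256th root of unity exists and the transform
# round-trips with simple modulus arithmetic.
P = 257
N = 256
g = pow(3, (P - 1) // N, P)        # primitive N-th root of unity mod P

def ntt(a, root):
    return [sum(a[j] * pow(root, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(A):
    inv_n = pow(N, P - 2, P)       # modular inverse of N via Fermat
    return [(x * inv_n) % P for x in ntt(A, pow(g, P - 2, P))]

data = [i % P for i in range(N)]
assert intt(ntt(data, g)) == data
print("length-256 transform over GF(257) round-trips")
```

    This is why the block lengths of codes over GF(257) and GF(65,537) are powers of two and mesh naturally with OFDM frame lengths.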

  8. The Ftx Noncoding Locus Controls X Chromosome Inactivation Independently of Its RNA Products.

    PubMed

    Furlan, Giulia; Gutierrez Hernandez, Nancy; Huret, Christophe; Galupa, Rafael; van Bemmel, Joke Gerarda; Romito, Antonio; Heard, Edith; Morey, Céline; Rougeulle, Claire

    2018-05-03

    Accumulation of the Xist long noncoding RNA (lncRNA) on one X chromosome is the trigger for X chromosome inactivation (XCI) in female mammals. Xist expression, which needs to be tightly controlled, involves a cis-acting region, the X-inactivation center (Xic), containing many lncRNA genes that evolved concomitantly to Xist from protein-coding ancestors through pseudogeneization and loss of coding potential. Here, we uncover an essential role for the Xic-linked noncoding gene Ftx in the regulation of Xist expression. We show that Ftx is required in cis to promote Xist transcriptional activation and establishment of XCI. Importantly, we demonstrate that this function depends on Ftx transcription and not on the RNA products. Our findings illustrate the multiplicity of layers operating in the establishment of XCI and highlight the diversity in the modus operandi of the noncoding players. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. The STAGS computer code

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Brogan, F. A.

    1978-01-01

    Basic information about the computer code STAGS (Structural Analysis of General Shells) is presented to describe to potential users the scope of the code and the solution procedures that are incorporated. Primarily, STAGS is intended for analysis of shell structures, although it has been extended to more complex shell configurations through the inclusion of springs and beam elements. The formulation is based on a variational approach in combination with local two dimensional power series representations of the displacement components. The computer code includes options for analysis of linear or nonlinear static stress, stability, vibrations, and transient response. Material as well as geometric nonlinearities are included. A few examples of applications of the code are presented for further illustration of its scope.

  10. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis by referring to bit error rate (BER), signal to noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.
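
    The zero-cross-correlation property that suppresses MAI can be illustrated with a generic multi-diagonal construction (a simplified sketch, not the exact EMD two-matrix structure proposed in the paper):

```python
# Generic multi-diagonal-style SAC-OCDMA code: each of K users gets w
# marks, one per block of K chips, placed on a diagonal, so distinct
# users share no chip and the cross-correlation is zero.

def md_code(K, w):
    codes = []
    for k in range(K):
        word = [0] * (K * w)
        for block in range(w):
            word[block * K + k] = 1     # diagonal mark in each block
        codes.append(word)
    return codes

codes = md_code(K=4, w=3)
cross = sum(a * b for a, b in zip(codes[0], codes[1]))
print(len(codes[0]), sum(codes[0]), cross)   # length 12, weight 3, cross 0
```

    With zero cross-correlation, an interfering subscriber contributes nothing to the intended subscriber's decision statistic, which is what removes MAI and the associated PIIN.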

  11. Contact radiotherapy using a 50 kV X-ray system: Evaluation of relative dose distribution with the Monte Carlo code PENELOPE and comparison with measurements

    NASA Astrophysics Data System (ADS)

    Croce, Olivier; Hachem, Sabet; Franchisseur, Eric; Marcié, Serge; Gérard, Jean-Pierre; Bordy, Jean-Marc

    2012-06-01

    This paper presents a dosimetric study concerning the system named "Papillon 50" used in the department of radiotherapy of the Centre Antoine-Lacassagne, Nice, France. The machine provides a 50 kVp X-ray beam, currently used to treat rectal cancers. The system can be mounted with various applicators of different diameters or shapes. These applicators can be fixed over the main rod tube of the unit in order to deliver the prescribed absorbed dose into the tumor with an optimal distribution. We have analyzed depth dose curves and dose profiles for the naked tube and for a set of three applicators. Dose measurements were made with an ionization chamber (PTW type 23342) and Gafchromic films (EBT2). We have also compared the measurements with simulations performed using the Monte Carlo code PENELOPE. Simulations were performed with a detailed geometrical description of the experimental setup and with sufficient statistics. The simulation results agree with the experimental measurements and provide an accurate evaluation of the dose delivered. The depths of the 50% isodose in water for the various applicators are 4.0, 6.0, 6.6 and 7.1 mm. The Monte Carlo PENELOPE simulations agree with the measurements for a 50 kV X-ray system. Simulations are able to confirm the measurements provided by Gafchromic films or ionization chambers. Results also demonstrate that Monte Carlo simulations could be helpful to validate future applicators designed for other localizations such as breast or skin cancers. Furthermore, Monte Carlo simulations could be a reliable alternative for a rapid evaluation of the dose delivered by such a system that uses multiple designs of applicators.
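
    The 50% isodose depths quoted above are the kind of figure of merit that can be read off a sampled depth-dose curve by linear interpolation; the curve below is invented for illustration, not PENELOPE output.

```python
# Extract the depth at which the dose falls to 50% of the surface value
# from a sampled depth-dose curve, by linear interpolation.

def depth_at_dose(depths, doses, level=50.0):
    for (d0, v0), (d1, v1) in zip(zip(depths, doses),
                                  zip(depths[1:], doses[1:])):
        if v0 >= level >= v1:
            return d0 + (v0 - level) * (d1 - d0) / (v0 - v1)
    return None

depths = [0, 2, 4, 6, 8]                  # mm (made-up curve)
doses = [100.0, 78.0, 57.0, 41.0, 30.0]   # percent of surface dose
print(f"d50 ~ {depth_at_dose(depths, doses):.1f} mm")
```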

  12. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight; all the other code words have weights that are multiples of four, with a minimum weight of 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes, which may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code; the theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15, with even-weight words congruent to zero modulo four. Decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, all the rest having weights greater than or equal to 16.
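    The kind of weight-distribution check described above can be illustrated on a much smaller self-dual code. The sketch below brute-forces the (8,4;4) extended Hamming code, a stand-in chosen because, like the (48,24;12) code, all of its nonzero weights are multiples of four; the generator matrix is the standard one, not anything from the paper:

```python
from itertools import product

# Brute-force weight enumeration for a small self-dual code. The
# (48,24;12) code in the abstract has 2^24 words, too many to list here,
# so we use the self-dual (8,4;4) extended Hamming code to show the
# check: every weight divisible by 4, minimum nonzero weight 4.

G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def weight_distribution(G):
    n = len(G[0])
    dist = {}
    for coeffs in product([0, 1], repeat=len(G)):
        word = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                word = [w ^ r for w, r in zip(word, row)]
        w = sum(word)
        dist[w] = dist.get(w, 0) + 1
    return dist

print(weight_distribution(G))   # weights 0, 4 and 8 only
```

    The same enumeration run on the length-48 code would confirm the claimed six words of weight eight, but requires a smarter (e.g. coset-based) search than this exhaustive loop.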

  13. Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2008-01-01

    complements of one another and the DNA duplex formed is a Watson-Crick (WC) duplex. However, there are many instances when the formation of non-WC...that the user's requirements for probe selection are met based on the Watson-Crick probe locality within a target. The second type, called...AFRL-RI-RS-TR-2007-288, Final Technical Report, January 2008, SUPERIMPOSED CODE THEORETIC ANALYSIS OF DNA CODES AND DNA COMPUTING

  14. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronics codes BIPR-8 from the Kurchatov Institute, DYN3D from the Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of the coupled codes to different transient and accident scenarios are presented. The need for further investigation is discussed.
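    The essential difficulty such coupled code systems solve is a feedback loop: the neutronics solution sets the power, the thermal-hydraulics solution sets the fuel temperature, and temperature feeds back on reactivity. A minimal sketch of that fixed-point iteration, with a hypothetical one-node model and made-up coefficients (nothing resembling ATHLET, BIPR-8, DYN3D or QUABOX/CUBBOX internals), looks like this:

```python
# Toy sketch of the neutronics / thermal-hydraulics feedback loop a
# coupled code system resolves: power -> fuel temperature -> Doppler
# reactivity feedback -> power, iterated to self-consistency.
# All coefficients below are assumed illustrative values.

ALPHA_DOPPLER = -2.0e-5   # reactivity per K of fuel temperature rise
T_REF = 800.0             # reference fuel temperature, K
P_REF = 1.0               # normalized reference power

def neutronics(rho):
    # steady-state stand-in: power rises linearly with net reactivity
    return P_REF * (1.0 + 100.0 * rho)

def thermal_hydraulics(power):
    # toy heat balance: fuel temperature rises linearly with power
    return T_REF + 200.0 * (power - P_REF)

def coupled_solve(rho_ext=1.0e-3, tol=1e-10, max_iter=200):
    power, temp = P_REF, T_REF
    for _ in range(max_iter):
        rho = rho_ext + ALPHA_DOPPLER * (temp - T_REF)
        new_power = neutronics(rho)
        temp = thermal_hydraulics(new_power)
        if abs(new_power - power) < tol:
            break
        power = new_power
    return new_power, temp

power, temp = coupled_solve()
print(power, temp)
```

    Because the Doppler coefficient is negative, the iteration is a contraction and settles on a power below the uncoupled value; real coupled codes perform an analogous exchange per time step over thousands of spatial nodes.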

  15. Germ-line and somatic EPHA2 coding variants in lens aging and cataract.

    PubMed

    Bennett, Thomas M; M'Hamdi, Oussama; Hejtmancik, J Fielding; Shiels, Alan

    2017-01-01

    Rare germ-line mutations in the coding regions of the human EPHA2 gene (EPHA2) have been associated with inherited forms of pediatric cataract, whereas frequent non-coding single nucleotide variants (SNVs) have been associated with age-related cataract. Here we sought to determine if germ-line EPHA2 coding SNVs were associated with age-related cataract in a case-control DNA panel (> 50 years) and if somatic EPHA2 coding SNVs were associated with lens aging and/or cataract in a post-mortem lens DNA panel (> 48 years). Micro-fluidic PCR amplification followed by targeted amplicon (exon) next-generation (deep) sequencing of EPHA2 (17 exons) afforded high read-depth coverage (1000x) for > 82% of reads in the cataract case-control panel (161 cases, 64 controls) and > 70% of reads in the post-mortem lens panel (35 clear lens pairs, 22 cataract lens pairs). Novel and reference (known) missense SNVs in EPHA2 that were predicted in silico to be functionally damaging were found in both cases and controls from the age-related cataract panel at variant allele frequencies (VAFs) consistent with germ-line transmission (VAF > 20%). Similarly, both novel and reference missense SNVs in EPHA2 were found in the post-mortem lens panel at VAFs consistent with a somatic origin (VAF > 3%). The majority of SNVs found in the cataract case-control panel and post-mortem lens panel were transitions, and many occurred at di-pyrimidine sites that are susceptible to ultraviolet (UV) radiation-induced mutation. These data suggest that novel germ-line (blood) and somatic (lens) coding SNVs in EPHA2 that are predicted to be functionally deleterious occur in adults over 50 years of age. However, both types of EPHA2 coding variants were present at comparable levels in individuals with or without age-related cataract, making simple genotype-phenotype correlations inconclusive.
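    The VAF-based distinction between germ-line and somatic origin can be sketched as a simple classifier. This is an illustration of the thresholds quoted in the abstract (VAF > 20% for germ-line transmission, VAF > 3% for a somatic origin), not the authors' actual variant-calling pipeline:

```python
# Illustrative sketch: classify a deep-sequencing variant call by its
# variant allele frequency (VAF = alt reads / total reads), using the
# thresholds quoted in the abstract. A real pipeline would also model
# sequencing error, strand bias and read quality.

def classify_by_vaf(alt_reads, total_reads,
                    germline_vaf=0.20, somatic_vaf=0.03):
    if total_reads == 0:
        return "no-coverage"
    vaf = alt_reads / total_reads
    if vaf > germline_vaf:
        return "likely germ-line"
    if vaf > somatic_vaf:
        return "possible somatic"
    return "below threshold"

# at 1000x coverage: 480 alt reads (VAF 0.48) vs 50 alt reads (VAF 0.05)
print(classify_by_vaf(480, 1000))   # likely germ-line
print(classify_by_vaf(50, 1000))    # possible somatic
```

    The 1000x read depth reported above is what makes the 3% somatic threshold meaningful: at that depth a 3% VAF still corresponds to roughly 30 supporting reads.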

  16. Germ-line and somatic EPHA2 coding variants in lens aging and cataract

    PubMed Central

    Bennett, Thomas M.; M’Hamdi, Oussama; Hejtmancik, J. Fielding; Shiels, Alan

    2017-01-01

    Rare germ-line mutations in the coding regions of the human EPHA2 gene (EPHA2) have been associated with inherited forms of pediatric cataract, whereas frequent non-coding single nucleotide variants (SNVs) have been associated with age-related cataract. Here we sought to determine if germ-line EPHA2 coding SNVs were associated with age-related cataract in a case-control DNA panel (> 50 years) and if somatic EPHA2 coding SNVs were associated with lens aging and/or cataract in a post-mortem lens DNA panel (> 48 years). Micro-fluidic PCR amplification followed by targeted amplicon (exon) next-generation (deep) sequencing of EPHA2 (17 exons) afforded high read-depth coverage (1000x) for > 82% of reads in the cataract case-control panel (161 cases, 64 controls) and > 70% of reads in the post-mortem lens panel (35 clear lens pairs, 22 cataract lens pairs). Novel and reference (known) missense SNVs in EPHA2 that were predicted in silico to be functionally damaging were found in both cases and controls from the age-related cataract panel at variant allele frequencies (VAFs) consistent with germ-line transmission (VAF > 20%). Similarly, both novel and reference missense SNVs in EPHA2 were found in the post-mortem lens panel at VAFs consistent with a somatic origin (VAF > 3%). The majority of SNVs found in the cataract case-control panel and post-mortem lens panel were transitions, and many occurred at di-pyrimidine sites that are susceptible to ultraviolet (UV) radiation-induced mutation. These data suggest that novel germ-line (blood) and somatic (lens) coding SNVs in EPHA2 that are predicted to be functionally deleterious occur in adults over 50 years of age. However, both types of EPHA2 coding variants were present at comparable levels in individuals with or without age-related cataract, making simple genotype-phenotype correlations inconclusive. PMID:29267365

  17. The investigation of topological phase of Gd1-xYxAuPb (x = 0, 0.25, 0.5, 0.75, 1) alloys under hydrostatic pressure

    NASA Astrophysics Data System (ADS)

    Saeidi, Parviz; Nourbakhsh, Zahra

    2018-04-01

    The topological phase of Gd1-xYxAuPb (x = 0, 0.25, 0.5, 0.75, 1) alloys has been studied using density functional theory as implemented in the WIEN2k code. The generalized gradient approximation (GGA), the GGA plus Hubbard parameter (GGA + U), the modified Becke-Johnson (MBJ) potential and the Engel-Vosko GGA, in the presence of spin-orbit coupling, have been used to investigate the topological band structure of Gd1-xYxAuPb alloys at zero pressure. The topological phase and band order of these alloys within the GGA and GGA + U approaches under hydrostatic pressure are also investigated. We find that, under hydrostatic pressure, the trivial topological phase is converted into a nontrivial topological phase for some compositions of Gd1-xYxAuPb (x = 0, 0.25, 0.5, 0.75, 1) in both the GGA and GGA + U approaches. In addition, the band inversion strength versus the lattice constant of these alloys is studied. Moreover, a schematic diagram is presented to show the trivial and nontrivial topological phases of Gd1-xYxAuPb (x = 0, 0.25, 0.5, 0.75, 1) alloys in both the GGA and GGA + U approaches.

  18. Securing mobile code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas

    2004-10-01

    If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques, including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent or at least slow down reverse-engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks and
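    The per-instruction authentication idea from the specialized-hardware camp can be sketched in miniature. This is a toy model, not the report's architecture: the "instructions" are strings, the MAC construction (HMAC-SHA256 over address plus instruction) and the key handling are assumptions, and a real design would bind the key to a physically protected CPU:

```python
import hashlib
import hmac

# Toy sketch of per-instruction 'tamper-proofing': each instruction is
# stored with a MAC over (address, instruction bytes), and the trusted
# execution unit verifies the MAC before executing. Key, instruction
# format and MAC choice are illustrative assumptions.

KEY = b"per-device-secret"   # would live inside the protected CPU

def seal(addr, instruction):
    tag = hmac.new(KEY, addr.to_bytes(4, "big") + instruction,
                   hashlib.sha256).digest()
    return instruction, tag

def execute(addr, instruction, tag):
    expected = hmac.new(KEY, addr.to_bytes(4, "big") + instruction,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise RuntimeError(f"tampered instruction at {addr:#06x}")
    return f"exec {instruction.decode()} @ {addr:#06x}"

program = [seal(addr, ins) for addr, ins in
           enumerate([b"LOAD r1", b"ADD r1,r2", b"STORE r1"])]
for addr, (ins, tag) in enumerate(program):
    print(execute(addr, ins, tag))
```

    Binding the address into the MAC is what blocks replay of a valid instruction at a different location, which a MAC over the instruction bytes alone would permit.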

  19. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
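    The circulant (second-stage) lifting step can be sketched concretely: each nonzero entry of a small base matrix is replaced by a Z x Z circulant permutation matrix (the identity cyclically shifted), and each zero by a Z x Z all-zero block. The base matrix and shift values below are made-up illustrations, not the patented construction:

```python
# Sketch of circulant lifting for a protograph-based LDPC code: expand
# a base (protograph) matrix by factor Z, replacing 1-entries with
# shifted identity blocks. Base matrix and shifts are illustrative.

def circulant(shift, Z):
    # Z x Z identity matrix cyclically shifted right by `shift`
    return [[1 if (c - r) % Z == shift else 0 for c in range(Z)]
            for r in range(Z)]

def lift(base, shifts, Z):
    rows, cols = len(base), len(base[0])
    H = [[0] * (cols * Z) for _ in range(rows * Z)]
    for i in range(rows):
        for j in range(cols):
            if base[i][j]:
                block = circulant(shifts[i][j] % Z, Z)
                for r in range(Z):
                    for c in range(Z):
                        H[i * Z + r][j * Z + c] = block[r][c]
    return H

base = [[1, 1, 1, 0],
        [0, 1, 1, 1]]
shifts = [[0, 1, 2, 0],
          [0, 3, 0, 1]]
H = lift(base, shifts, Z=4)
# each lifted row keeps the weight of its protograph row
```

    Because each block is a permutation matrix, the lifted parity-check matrix inherits the protograph's degree profile exactly, which is why the protograph's decoding thresholds carry over to the full-length code.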

  20. Design criteria for small coded aperture masks in gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Sembay, S.; Gehrels, Neil

    1990-01-01

    Most theoretical work on coded aperture masks in X-ray and low-energy gamma-ray astronomy has concentrated on masks with large numbers of elements. For gamma-ray spectrometers in the MeV range, the detector plane usually has only a few discrete elements, so masks with small numbers of elements are called for. In this case it is feasible to analyze by computer all possible mask patterns of a given dimension to find the ones that best satisfy the desired performance criteria. A particular set of performance criteria for comparing the flux sensitivities, source positioning accuracies and transparencies of different mask patterns is developed. The results of such a computer analysis for masks up to dimension 5 x 5 unit cells are presented, and it is concluded that there is a great deal of flexibility in the choice of mask pattern for each dimension.
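    The exhaustive-search approach is tractable precisely because the masks are small. The sketch below enumerates every 3 x 3 binary pattern with a 50%-ish open fraction and scores it by the flatness of its periodic autocorrelation sidelobes; this is one plausible stand-in criterion, whereas the paper weighs flux sensitivity, positioning accuracy and transparency together:

```python
from itertools import product

# Exhaustive search over small coded-aperture masks: enumerate all 3x3
# binary patterns with 4 open cells and keep the one whose periodic
# autocorrelation sidelobes are flattest (smallest max-min spread).
# The scoring criterion is an illustrative assumption.

N = 3

def periodic_autocorr(mask, dx, dy):
    return sum(mask[r][c] * mask[(r + dy) % N][(c + dx) % N]
               for r in range(N) for c in range(N))

def sidelobe_spread(mask):
    side = [periodic_autocorr(mask, dx, dy)
            for dy in range(N) for dx in range(N) if (dx, dy) != (0, 0)]
    return max(side) - min(side)

best = None
for bits in product([0, 1], repeat=N * N):
    if sum(bits) != 4:          # fix transparency near 50%
        continue
    mask = [list(bits[r * N:(r + 1) * N]) for r in range(N)]
    spread = sidelobe_spread(mask)
    if best is None or spread < best[0]:
        best = (spread, mask)

print(best)
```

    For 3 x 3 with 4 open cells a perfectly flat sidelobe pattern is impossible (12 pair differences cannot divide evenly over 8 shifts), so the search settles on masks with spread 1; scaling the same loop to 5 x 5 unit cells is still only a few million candidates.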