Science.gov

Sample records for a codes

  1. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
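    The error-correction notion used above can be made concrete for any binary code: the minimum Hamming distance d determines that floor((d-1)/2) errors are correctable. A minimal sketch with toy codewords (illustrative only, not actual receptive field codes):

```python
from itertools import combinations

def min_distance(code):
    """Minimum Hamming distance of a binary code given as bit strings."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(code, 2))

# toy code of four 5-bit codewords (hypothetical example)
code = ["00000", "11100", "00111", "11011"]
d = min_distance(code)
print(d, (d - 1) // 2)   # minimum distance 3 -> corrects 1 error
```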

  2. A class of constacyclic BCH codes and new quantum codes

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Ruihu; Lv, Liangdong; Ma, Yuena

    2017-03-01

    Constacyclic BCH codes have been widely studied in the literature and have been used to construct quantum codes in recent years. However, for the class of quantum codes of length n=q^{2m}+1 over F_{q^2} with q an odd prime power, only those of distance δ ≤ 2q^2 have been obtained in the literature. In this paper, through a detailed analysis of the properties of q^2-ary cyclotomic cosets, the maximum designed distance δ_{max} of a class of Hermitian dual-containing constacyclic BCH codes of length n=q^{2m}+1 is determined; this class of constacyclic codes has characteristics analogous to those of primitive BCH codes over F_{q^2}. We then obtain a sequence of dual-containing constacyclic codes of designed distances 2q^2<δ ≤ δ_{max}. Consequently, new quantum codes with distance d > 2q^2 can be constructed from these dual-containing codes via the Hermitian construction. These newly obtained quantum codes have better code rates than those constructed from primitive BCH codes.
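    The q^2-ary cyclotomic cosets that drive the designed-distance analysis are simply orbits under multiplication by q^2 modulo n. A minimal sketch with toy parameters (q = 3, m = 1; not the parameter ranges studied in the paper):

```python
def cyclotomic_cosets(q2, n):
    """q^2-ary cyclotomic cosets mod n: orbits of s -> q2*s (mod n)."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            seen.add(x)
            x = (x * q2) % n
        cosets.append(coset)
    return cosets

# toy size: q = 3, m = 1, so n = q**(2*m) + 1 = 10 and q2 = 9
for c in cyclotomic_cosets(9, 10):
    print(c)
```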

  3. A coded tracking telemetry system

    USGS Publications Warehouse

    Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.; Amlaner, Charles J.

    1989-01-01

    We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a micro-computer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.

  4. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  5. A secure and efficient entropy coding based on arithmetic coding

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Zhang, Jiashu

    2009-12-01

    A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed as a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, while data compression with entropy optimality is achieved simultaneously. Because this modification of the arithmetic coding methodology provides security without changing the existing framework, it can easily be incorporated into most international image and video standards as the final entropy coding stage. Theoretical analysis and numerical simulations on both static and adaptive models show that the proposed encryption algorithm achieves high security without loss of compression efficiency or additional computational burden relative to a standard AC.
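    For reference, the interval-narrowing recursion of a standard (non-secure) arithmetic coder, which the scheme's NDF-driven interval mapping modifies, can be sketched exactly with rationals; the NDF/PRNG layer itself is not reproduced here:

```python
from fractions import Fraction

def arith_encode(msg, probs):
    """Classic interval-narrowing arithmetic coder (exact rationals)."""
    lo, hi = Fraction(0), Fraction(1)
    for s in msg:
        span, c = hi - lo, Fraction(0)
        for sym, p in probs.items():
            if sym == s:
                lo, hi = lo + span * c, lo + span * (c + p)
                break
            c += p
    return (lo + hi) / 2          # any point inside the final interval

def arith_decode(x, probs, n):
    out, lo, hi = [], Fraction(0), Fraction(1)
    for _ in range(n):
        span, c = hi - lo, Fraction(0)
        for sym, p in probs.items():
            if lo + span * (c + p) > x:      # x falls in this symbol's slice
                out.append(sym)
                lo, hi = lo + span * c, lo + span * (c + p)
                break
            c += p
    return "".join(out)

probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
msg = "abacab"
x = arith_encode(msg, probs)
print(arith_decode(x, probs, len(msg)))  # round-trips to "abacab"
```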

  6. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Furthermore, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code.
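    The comma-free property referenced above is directly checkable: a trinucleotide code is comma-free if no codeword appears in a shifted frame of any two-codeword concatenation. A toy check with hypothetical codes (not the paper's code X):

```python
from itertools import product

def is_comma_free(code):
    """True if no codeword occurs in frame 1 or 2 of any concatenation
    of two codewords (so the reading frame is always recoverable)."""
    for u, v in product(code, repeat=2):
        w = u + v
        if w[1:4] in code or w[2:5] in code:
            return False
    return True

print(is_comma_free({"AAC", "GTT"}))  # shifted frames never hit the code
print(is_comma_free({"AAA"}))         # AAAAAA contains AAA in shifted frames
```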

  7. HADES, A Radiographic Simulation Code

    SciTech Connect

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.

  8. The Proteomic Code: a molecular recognition code for proteins

    PubMed Central

    Biro, Jan C

    2007-01-01

    Background The Proteomic Code is a set of rules by which information in genetic material is transferred into the physico-chemical properties of amino acids. It determines how individual amino acids interact with each other during folding and in specific protein-protein interactions. The Proteomic Code is part of the redundant Genetic Code. Review The 25-year-old history of this concept is reviewed from the first independent suggestions by Biro and Mekler, through the works of Blalock, Root-Bernstein, Siemion, Miller and others, followed by the discovery of a Common Periodic Table of Codons and Nucleic Acids in 2003 and culminating in the recent conceptualization of partial complementary coding of interacting amino acids as well as the theory of the nucleic acid-assisted protein folding. Methods and conclusions A novel cloning method for the design and production of specific, high-affinity-reacting proteins (SHARP) is presented. This method is based on the concept of proteomic codes and is suitable for large-scale, industrial production of specifically interacting peptides. PMID:17999762

  9. SLINGSHOT - a Coilgun Design Code

    SciTech Connect

    MARDER, BARRY M.

    2001-09-01

    The Sandia coilgun [1,2,3,4,5] is an inductive electromagnetic launcher. It consists of a sequence of powered, multi-turn coils surrounding a flyway of circular cross-section through which a conducting armature passes. When the armature is properly positioned with respect to a coil, a charged capacitor is switched into the coil circuit. The rising coil currents induce a current in the armature, producing a repulsive accelerating force. The basic numerical tool for modeling the coilgun is the SLINGSHOT code, an expanded, user-friendly successor to WARP-10 [6]. SLINGSHOT computes the currents in the coils and armature, finds the forces produced by those currents, and moves the armature through the array of coils. In this approach, the cylindrically symmetric coils and armature are subdivided into concentric hoops with rectangular cross-section, in each of which the current is assumed to be uniform. The ensemble of hoops are treated as coupled circuits. The specific heats and resistivities of the hoops are found as functions of temperature and used to determine the resistive heating. The code calculates the resistances and inductances for all hoops, and the mutual inductances for all hoop pairs. Using these, it computes the hoop currents from their circuit equations, finds the forces from the products of these currents and the mutual inductance gradient, and moves the armature. Treating the problem as a set of coupled circuits is a fast and accurate approach compared to solving the field equations. Its use, however, is restricted to problems in which the symmetry dictates the current paths. This paper is divided into three parts. The first presents a demonstration of the code. The second describes the input and output. The third part describes the physical models and numerical methods used in the code. It is assumed that the reader is familiar with coilguns.
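    The coupled-circuit approach described above can be sketched for the minimal case of one drive coil and one armature hoop. All parameters and the Gaussian mutual-inductance model below are hypothetical, chosen only to illustrate the force arising from the mutual-inductance gradient, and a simple explicit Euler step is used rather than the code's actual numerical method:

```python
import numpy as np

# Hypothetical parameters (not from the paper): one coil driven by a
# charged capacitor, one shorted armature hoop, Gaussian M(z) model.
L1, L2, R1, R2 = 1e-4, 1e-4, 1e-2, 1e-2      # inductances (H), resistances (ohm)
C, q = 1e-3, 1.0                              # capacitance (F), charge (C): 1 kV
mass, z, v = 0.1, 0.02, 0.0                   # armature mass (kg), position, speed
M0, d = 5e-5, 0.05                            # mutual-inductance scale and length

M  = lambda z: M0 * np.exp(-(z / d) ** 2)     # mutual inductance vs. separation
dM = lambda z: -2 * z / d**2 * M(z)           # its gradient

i1 = i2 = 0.0
dt = 1e-6
for _ in range(1000):                         # 1 ms of simulated time
    # coupled circuit equations: [L1 M; M L2] * d[i1, i2]/dt = rhs
    A = np.array([[L1, M(z)], [M(z), L2]])
    rhs = np.array([q / C - R1 * i1 - dM(z) * v * i2,
                    -R2 * i2 - dM(z) * v * i1])
    di1, di2 = np.linalg.solve(A, rhs)
    i1 += di1 * dt
    i2 += di2 * dt
    q  -= i1 * dt                             # capacitor discharges into the coil
    F = i1 * i2 * dM(z)                       # force from mutual-inductance gradient
    v += F / mass * dt
    z += v * dt

print(f"armature at z = {z:.4f} m, v = {v:.2f} m/s")
```

The induced armature current opposes the coil current, so the product i1*i2*dM(z) is positive and the armature is repelled, as in the full code.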

  10. A Pseudorandom Code Modulated LIDAR

    NASA Astrophysics Data System (ADS)

    Hunt, K. P.; Eichinger, W. E.; Kruger, A.

    2009-12-01

    Typical Light Detection and Ranging (LIDAR) uses high power pulsed lasers to ensure a detectable return signal. For short ranges, modulated diode lasers offer an attractive alternative, particularly in the areas of size, weight, cost, eye safety and use of energy. Flexible electronic modulation of the laser diode allows the development of pseudorandom code (PRC) LIDAR systems that can overcome the disadvantage of low output power and thus low signal to noise ratios. Different PRCs have been proposed. For example, so-called M-sequences can be generated simply, but are unbalanced: they have more ones than zeros, which results in a residual noise component. Other sequences such as the A1 and A2 sequences are balanced, but have two autocorrelation peaks, resulting in undesirable pickup of signals from different ranges. In this work, we investigate a new code, an M-sequence with a zero added at the end. The result is still easily generated and has a single autocorrelation peak, but is now balanced. We loaded these sequences into a commercial arbitrary waveform generator (ARB), an Agilent 33250A, which then modulates the laser diode. This allows sequences to be changed quickly and easily, permitting us to design and investigate a wide range of PRC sequences with desirable properties. The ARB modulates a Melles Griot 56ICS near infrared laser diode at a 10 MHz chip rate. Backscatter is collected and focused by a telescope and the detected signal is sampled and correlated with the known PRC. We have gathered data from this LIDAR system and experimentally assessed the performance of this new class of codes.
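    The zero-padded M-sequence described above is easy to reproduce: generate a maximal-length sequence from a linear-feedback shift register, append a zero, and check balance and the autocorrelation peak. A sketch with an assumed 7-stage LFSR (taps 7 and 6; the paper's actual sequence length is not stated here):

```python
import numpy as np

def m_sequence(taps, nbits):
    """Maximal-length sequence from an nbits-stage Fibonacci LFSR;
    `taps` are 1-indexed feedback positions, period 2**nbits - 1."""
    state = [1] * nbits
    seq = []
    for _ in range(2**nbits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

m = m_sequence([7, 6], 7)              # period-127 m-sequence: 64 ones, 63 zeros
balanced = np.append(m, 0)             # append a zero -> balanced, length 128

b = 2 * balanced - 1                   # map {0,1} -> {-1,+1}
acf = np.array([np.dot(b, np.roll(b, k)) for k in range(len(b))])
print(int(balanced.sum()), int(acf[0]), int(np.abs(acf[1:]).max()))
```

The padded sequence has equal numbers of ones and zeros, and its circular autocorrelation has a single dominant peak at zero lag.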

  11. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is also analyzed.

  12. The chromatin regulatory code: Beyond a histone code

    NASA Astrophysics Data System (ADS)

    Lesne, A.

    2006-03-01

    In this commentary on the contribution by Arndt Benecke in this issue, I discuss why the notion of “chromatin code” introduced and elaborated in this paper is to be preferred to that of “histone code”. Speaking of a code as regards nucleosome conformation and histone tail post-translational modifications only makes sense within the chromatin fiber, where their physico-chemical features can be translated into regulatory programs at the genome level, by means of a complex, multi-level interplay with the fiber architecture and dynamics settled in the course of Evolution. In particular, this chromatin code presumably exploits allosteric transitions of the chromatin fiber. The chromatin structure dependence of its translation suggests two alternative modes of transcription initiation regulation, also proposed in the paper by A. Benecke in this issue for interpreting strikingly bimodal micro-array data.

  13. A Better Handoff for Code Officials

    SciTech Connect

    Conover, David R.; Yerkes, Sara

    2010-09-24

    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn’t leave much time to search www.energycodes.gov, www.iccsafe.org, or the range of other helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach’s game plan, the Resource Guide is an "energy playbook" for code officials.

  14. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
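    The column constraint above uses the extended Hamming (8,4;4) code; a quick construction confirming its minimum distance of 4, using a standard self-dual generator matrix (not taken from the paper):

```python
import itertools
import numpy as np

# standard generator matrix of the self-dual extended Hamming (8,4;4) code
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])

codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)]
weights = sorted({int(c.sum()) for c in codewords if c.any()})
print(len(codewords), weights)   # 16 codewords, nonzero weights {4, 8}
```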

  15. Efficiency of a model human image code

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection of artifacts, the image-code size is about 1 bit/pixel.
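    The entropy lower bound mentioned above is the first-order entropy of the quantized samples. A minimal sketch on synthetic data (a rounded Gaussian stands in for one quantized bandpass layer; this is illustrative, not the Cortex transform itself):

```python
import numpy as np

def entropy_bits_per_sample(samples):
    """First-order entropy of quantized samples (bits/sample): a lower
    bound on the size of a memoryless code for them."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
layer = np.round(rng.standard_normal(10000) * 2)   # stand-in quantized layer
h = entropy_bits_per_sample(layer)
print(f"{h:.2f} bits/sample")
```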

  16. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.
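    Of the five algorithms, linear predictive coding is the simplest to state: predict each sample from its predecessors and code only the residual. A least-squares sketch (illustrative only, not any specific algorithm from the review):

```python
import numpy as np

def lpc(x, order):
    """Least-squares linear prediction: fit coefficients predicting x[n]
    from the previous `order` samples, and return the residual."""
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    A, y = np.array(rows), x[order:]
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, y - A @ a

rng = np.random.default_rng(0)
n = np.arange(400)
x = np.sin(2 * np.pi * 0.05 * n) + 0.01 * rng.standard_normal(400)
a, resid = lpc(x, order=4)
print(resid.std() / x.std())   # residual is far smaller than the signal
```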

  17. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Fujiwara, T.; Lin, S.

    1986-01-01

    In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
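    The scheme's logic can be sketched with toy stand-ins: a repetition-3 inner code that corrects, an even-parity outer check that only detects, and retransmission whenever the outer check fails (these are not the codes analyzed in the paper):

```python
import random

def send_frame(bits, p, rng):
    """One frame through a BSC(p): repetition-3 inner code corrects by
    majority vote; an even-parity outer check bit only detects."""
    parity = sum(bits) % 2
    coded = [b for b in bits + [parity] for _ in range(3)]     # inner encode
    recv = [b ^ (rng.random() < p) for b in coded]             # channel errors
    dec = [int(sum(recv[i:i + 3]) >= 2) for i in range(0, len(recv), 3)]
    data, chk = dec[:-1], dec[-1]
    return (sum(data) % 2) == chk                              # outer detection

rng = random.Random(1)
p, K, frames = 0.05, 32, 2000
tx = 0
for _ in range(frames):
    bits = [rng.randint(0, 1) for _ in range(K)]
    while True:                       # retransmit until the outer check passes
        tx += 1
        if send_frame(bits, p, rng):
            break

rate = K / (3 * (K + 1))              # rate of the concatenated code
throughput = rate * frames / tx
print(f"rate {rate:.3f}, throughput {throughput:.3f}")
```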

  18. DKPRO: A radionuclide decay and reprocessing code

    SciTech Connect

    Wootan, D.; Schmittroth, F.A.

    1997-07-14

    The DKPRO code solves the general problem of modeling complex nuclear waste streams using ORIGEN2 radionuclide production files. There is a continuing need for estimates of Hanford radionuclide inventories. Physical measurements are one basis; calculational estimates, the approach represented here, are another. Given a known nuclear fuel history, it is relatively straightforward to calculate radionuclide inventories with codes such as the widely used Oak Ridge National Laboratory code ORIGEN2.

  19. Fallout Computer Codes. A Bibliographic Perspective

    DTIC Science & Technology

    1994-07-01

    This report reviews the features and differences among the major radioactive fallout models and computer codes that are either in current use or that form the basis for more contemporary codes and other computational tools. The DELFIC, WSEG-10, KDFOC2, SEER3, and DNAF-1 codes and the EM-1 model are addressed. One of the reviewed models calculates g(t) by assuming that fallout descends from a nuclear cloud that is characterized initially by a Gaussian distribution.

  20. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  1. Towards a testbed for malicious code detection

    SciTech Connect

    Lo, R.; Kerchen, P.; Crawford, R.; Ho, W.; Crossley, J.; Fink, G.; Levitt, K.; Olsson, R.; Archer, M. (Univ. of California, Davis, Div. of Computer Science)

    1991-01-01

    This paper proposes an environment for detecting many types of malicious code, including computer viruses, Trojan horses, and time/logic bombs. This malicious code testbed (MCT) is based upon both static and dynamic analysis tools developed at the University of California, Davis, which have been shown to be effective against certain types of malicious code. The testbed extends the usefulness of these tools by using them in a complementary fashion to detect more general cases of malicious code. Perhaps more importantly, the MCT allows administrators and security analysts to check a program before installation, thereby avoiding any damage a malicious program might inflict. 5 refs., 2 figs., 2 tabs.

  2. Toward a Code of Ethics for Academics.

    ERIC Educational Resources Information Center

    Schurr, George M.

    Considerations regarding the establishment of a code of ethics for academics are discussed. All professions, not just academicians, are being asked to specify what it is that they contribute to society and to demonstrate that they are contributing it and not just satisfying their own interests. A code explicitly delimits the responsibility and…

  3. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  4. MHDust: A 3-fluid dusty plasma code

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel

    MHDust is a next-generation 3-fluid magnetized dusty plasma code, treating the inertial dynamics of both the dust and ion components. The code is written in ANSI C; the numerical method employs Leap-Frog and Dufort-Frankel integration schemes. Features include: nonlinear collisional terms, quasi-neutrality or continuity based electron densities, and dynamical dust charge number. Tests of wave-mode propagation (acoustic and electromagnetic) allow a comparison to linear wave-mode theory. Additional nonlinear phenomena are presented, including magnetic reconnection and shear-flow instabilities. Relevant parameters for the space environment are considered, allowing a comparison to be made with previous dusty plasma codes (DENISIS). The utility of the code is expanded through the possibility of small dust mass, which allows MHDust to be used as a 2-ion plasma code. MHDust considerably expands the range of numerical investigations into nonlinear phenomena in the field of astrophysical dusty plasmas.

  5. The Nuremberg Code: A critique.

    PubMed

    Ghooi, Ravindra B

    2011-04-01

    The Nuremberg Code, drafted at the end of the Doctors' Trial in Nuremberg in 1947, has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of the ten principles in the Nuremberg Code are derived from the 1931 Guidelines, and two of the four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside because, unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics.

  6. Do plant cell walls have a code?

    PubMed

    Tavares, Eveline Q P; Buckeridge, Marcos S

    2015-12-01

    A code is a set of rules that establishes a correspondence between two worlds: signs (consisting of encrypted information) and meaning (of the decrypted message). A third element, the adaptor, connects both worlds, assigning meaning to a code. We propose that a Glycomic Code exists in plant cell walls, where signs are represented by monosaccharides and phenylpropanoids and meaning is cell wall architecture, with its highly complex association of polymers. Cell wall biosynthetic mechanisms, structure, architecture and properties are addressed from the Code Biology perspective, focusing on how they oppose cell wall deconstruction. Cell wall hydrolysis is treated mainly as a mechanism of decryption of the Glycomic Code. Evidence for encoded information in the fine structure of cell wall polymers is highlighted, and the implications of the existence of the Glycomic Code are discussed. Aspects related to fine structure are responsible for polysaccharide packing and polymer-polymer interactions, affecting the final cell wall architecture. The question of whether polymer assembly within a wall displays properties similar to those of other biological macromolecules (i.e. proteins, DNA, histones) is addressed: do they display a code?

  7. Report on a workshop concerning code validation

    SciTech Connect

    1996-12-01

    The design of wind turbine components is becoming more critical as turbines become lighter and more dynamically active. Computer codes that will reliably predict turbine dynamic response are, therefore, more necessary than before. However, predicting the dynamic response of very slender rotating structures that operate in turbulent winds is not a simple matter. Even so, codes for this purpose have been developed and tested in North America and in Europe, and it is important to disseminate information on this subject. The purpose of this workshop was to allow those involved in the wind energy industry in the US to assess the progress in validation of the codes most commonly used for structural/aero-elastic wind turbine simulation. The theme of the workshop was, "How do we know it's right?" This was the question that participants were encouraged to ask themselves throughout the meeting in order to avoid the temptation of presenting information in a less-than-critical atmosphere. Other questions posed at the meeting were: What is the proof that the codes used can truthfully represent the field data? At what steps were the codes tested against known solutions, or against reliable field data? How should the designer or user validate results? What computer resources are needed? How do codes being used in Europe compare with those used in the US? How does the code used affect industry certification? What can be expected in the future?

  8. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  9. A Subband Coding Method for HDTV

    NASA Technical Reports Server (NTRS)

    Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.

  10. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Shu, L.; Kasami, T.

    1985-01-01

    A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.

  11. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  12. MACRAD: A mass analysis code for radiators

    SciTech Connect

    Gallup, D.R.

    1988-01-01

    A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length to diameter ratio, vapor to wick radius, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.

  13. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmark to real-world application codes is presented. This will demonstrate the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experiences with this toolkit.

  14. On a Mathematical Theory of Coded Exposure

    DTIC Science & Technology

    2014-08-01

    Coded exposure, computational photography, flutter shutter, motion blur, mean square error (MSE), signal to noise ratio (SNR). 1 Introduction Since the... photon emission µ doubles then the SNR is multiplied by a factor √2. (And we retrieve the fundamental theorem of photography.) Note that if we have no... deduce that the SNR evolves proportionally to √µ and we retrieve the fundamental theorem of photography. We now turn to the optimization of the coded

  15. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  16. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega(sub b) = 1, and sigma(sub 8) = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L(exp 3) where L = 64 h(exp -1) Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N(exp 3) = (32(exp 3), 64(exp 3), 128(exp 3), and 256(exp 3)) cells, the SPH codes at N(exp 3) = 32(exp 3) and 64(exp 3) particles. Results were then rebinned to a 16(exp 3) grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as (T) and (rho(exp 2))(exp 1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho(exp 2)) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes.

  17. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
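
    The central computation TEA performs, Gibbs free-energy minimization at fixed temperature and pressure, can be illustrated with a deliberately tiny sketch. The following is not TEA's iterative Lagrangian scheme: it scans the extent of a hypothetical two-species isomerization A <-> B (invented standard free energies, RT set to 1) and keeps the composition with the lowest total Gibbs energy.

```python
import math

def gibbs_minimize(g_a, g_b, rt=1.0, steps=200_000):
    """Brute-force Gibbs minimization for A <-> B at fixed T and P.

    G(x) = (1-x)*(g_a + rt*ln(1-x)) + x*(g_b + rt*ln(x)),
    where x is the mole fraction of B. Returns the minimizing x.
    """
    best_x, best_g = 0.5, float("inf")
    for k in range(1, steps):
        x = k / steps
        g = (1 - x) * (g_a + rt * math.log(1 - x)) + x * (g_b + rt * math.log(x))
        if g < best_g:
            best_g, best_x = g, x
    return best_x
```

    Setting dG/dx = 0 gives x/(1-x) = exp((g_a - g_b)/rt), so for g_a = 0 and g_b = -1 the scan should land near x = e/(1+e) ≈ 0.731, which it does.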

  18. DISH CODE A deeply simplified hydrodynamic code for applications to warm dense matter

    SciTech Connect

    More, Richard

    2007-08-22

    DISH is a 1-dimensional (planar) Lagrangian hydrodynamic code intended for application to experiments on warm dense matter. The code is a simplified version of the DPC code written in the Data and Planning Center of the National Institute for Fusion Science in Toki, Japan. DPC was originally intended as a testbed for exploring equation of state and opacity models, but turned out to have a variety of applications. The Dish code is a "deeply simplified hydrodynamic" code, deliberately made as simple as possible. It is intended to be easy to understand, easy to use and easy to change.

  19. FREEFALL: A seabed penetrator flight code

    SciTech Connect

    Hickerson, J.

    1988-01-01

    This report presents a one-dimensional model and computer program for predicting the motion of seabed penetrators. The program calculates the acceleration, velocity, and depth of a penetrator as a function of time from the moment of launch until the vehicle comes to rest in the sediment. The code is written in Pascal language for use on a small personal computer. Results are presented as printed tables and graphs. A comparison with experimental data is given which indicates that the accuracy of the code is perhaps as good as current techniques for measuring vehicle performance. 31 refs., 12 figs., 5 tabs.
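
    The report's one-dimensional model lends itself to a compact sketch. The version below is our own illustration with invented parameters (quadratic water drag followed by a constant sediment resisting stress), not the report's actual force model or its Pascal implementation; it integrates acceleration, velocity, and depth with a simple Euler scheme from launch until the vehicle comes to rest.

```python
def freefall(m=500.0, area=0.05, cd=0.7, rho_w=1025.0, g=9.81,
             drop_height=50.0, sediment_stress=2.0e6, dt=1e-3):
    """Return (embedment depth in m, elapsed time in s) for a 1D penetrator.

    z = 0 at the seabed (negative above it); all parameters are hypothetical.
    """
    v, z, t = 0.0, -drop_height, 0.0
    while True:
        if z < 0.0:
            # Falling through water: gravity minus quadratic drag.
            a = g - 0.5 * rho_w * cd * area * v * v / m
        else:
            # In sediment: gravity minus an assumed constant resisting stress.
            a = g - sediment_stress * area / m
        v += a * dt
        z += v * dt
        t += dt
        if z >= 0.0 and v <= 0.0:
            return z, t
```

    With these made-up numbers the penetrator approaches terminal velocity (about 16 m/s) in the water column and then stops within roughly a meter of sediment.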

  20. DUNE - a granular flow code

    SciTech Connect

    Slone, D M; Cottom, T L; Bateson, W B

    2004-11-23

    DUNE was designed to accurately model the spectrum of granular flow. Granular flow encompasses the motions of discrete particles. The particles are macroscopic in that there is no Brownian motion. The flow can be thought of as a dispersed phase (the particles) interacting with a fluid phase (air or water). Validation of the physical models proceeds in tandem with simple experimental confirmation. The current development team is working toward the goal of building a flexible architecture where existing technologies can easily be integrated to further the capability of the simulation. We describe the DUNE architecture in some detail using physics models appropriate for an imploding liner experiment.

  1. Should managers have a code of conduct?

    PubMed

    Bayliss, P

    1994-02-01

    Much attention is currently being given to values and ethics in the NHS. Issues of accountability are being explored as a consequence of the Cadbury report. The Institute of Health Services Management (IHSM) is considering whether managers should have a code of ethics. Central to this issue is what managers themselves think; the application of such a code may well stand or fall by whether managers are prepared to have ownership of it, and are prepared to make it work. Paul Bayliss reports on a survey of managers' views.

  2. Implementing a modular system of computer codes

    SciTech Connect

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, hopefully informative and useful to anyone developing a modular code system of much sophistication. Overall, this report in a general way summarizes the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background on which work on HTGR reactor physics is being carried out.

  3. Vectorization of a multiprocessor multifrontal code

    SciTech Connect

    Amestoy, P.R.; Duff, I.S.

    1989-01-01

    The authors describe design changes that enhance the vectorization of a multiprocessor version of a multifrontal code for the direct solution of large sparse sets of linear equations. These changes employ techniques used with success in full Gaussian elimination and are based on the use of matrix-vector and matrix-matrix kernels as implemented in the Level 2 and Level 3 BLAS. They illustrate the performance of the improved code by runs on the IBM 3090/VF, the ETA-10P, and the CRAY-2. Although their experiments are principally on a single processor of these machines, they briefly consider the influence of multiprocessing. Speedup factors of more than 11 are obtained, and the modified code performs at over 200 MFLOPS on standard structures problems on one processor of the CRAY-2.
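
    The gain from matrix-matrix kernels comes from reorganizing the factorization's inner loops into block updates, which is what Level 3 BLAS exposes. The pure-Python sketch below only illustrates the blocked access pattern (it is obviously not vectorized and carries none of the speed of the IBM, ETA, or Cray kernels); the arithmetic is identical to the naive triple loop.

```python
def matmul_blocked(a, b, bs=2):
    """Blocked matrix multiply: process bs-by-bs sub-blocks, the access
    pattern that Level 3 BLAS kernels (e.g., GEMM) exploit."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i0 in range(0, n, bs):
        for k0 in range(0, m, bs):
            for j0 in range(0, p, bs):
                for i in range(i0, min(i0 + bs, n)):
                    for k in range(k0, min(k0 + bs, m)):
                        aik = a[i][k]
                        for j in range(j0, min(j0 + bs, p)):
                            c[i][j] += aik * b[k][j]
    return c
```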

  4. A MULTIPURPOSE COHERENT INSTABILITY SIMULATION CODE

    SciTech Connect

    BLASKIEWICZ,M.

    2007-06-25

    A multipurpose coherent instability simulation code has been written, documented, and released for use. TRANFT (tran-eff-tee) uses fast Fourier transforms to model transverse wakefields, transverse detuning wakes and longitudinal wakefields in a computationally efficient way. Dual harmonic RF allows for the study of enhanced synchrotron frequency spread. When coupled with chromaticity, the theoretically challenging but highly practical post head-tail regime is open to study. Detuning wakes allow for transverse space charge forces in low energy hadron beams, and a switch allowing for radiation damping makes the code useful for electrons.

  5. A new class of polyphase pulse compression codes

    NASA Astrophysics Data System (ADS)

    Deng, Hai; Lin, Maoyong

    This study presents a synthesis method for a new class of polyphase pulse compression codes, the NLFM code, and investigates its properties. The NLFM code, which is derived from sampling and quantization of a nonlinear FM waveform, features low range sidelobes and insensitivity to Doppler effect. Simulation results show that the major properties of the NLFM polyphase code are superior to those of the Frank code.
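
    The Frank code used as the comparison baseline is simple to generate: a length N = M^2 code whose (i, j)-th chip has phase 2*pi*i*j/M. The sketch below builds the Frank code and its aperiodic autocorrelation (this is only the baseline, not the paper's NLFM-derived code, whose sampling and quantization details are not reproduced here); the mainlobe equals N while every sidelobe stays far below it.

```python
import cmath

def frank_code(m: int):
    """Frank polyphase code of length m*m: chip (i, j) has phase 2*pi*i*j/m."""
    return [cmath.exp(2j * cmath.pi * i * j / m)
            for i in range(m) for j in range(m)]

def autocorr_mags(s):
    """Magnitudes of the aperiodic autocorrelation at lags 0..len(s)-1."""
    n = len(s)
    return [abs(sum(s[t] * s[t + k].conjugate() for t in range(n - k)))
            for k in range(n)]
```

    For M = 4 (N = 16) the zero-lag peak is 16 while the largest sidelobe magnitude is on the order of one.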

  6. Finding the Key to a Better Code: Code Team Restructure to Improve Performance and Outcomes

    PubMed Central

    Prince, Cynthia R.; Hines, Elizabeth J.; Chyou, Po-Huang; Heegeman, David J.

    2014-01-01

    Code teams respond to acute life threatening changes in a patient’s status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team’s resuscitation could be hindered. To improve the overall performance of our hospital’s code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements in team member confidence in the skills specific to their role and clarity in their role’s position were identified. Smaller improvements were seen in team leadership and reduction in the amount of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year since the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as decreased the average time-to-defibrillation, making it beneficial to many, including the team members, medical institution, and patients. PMID:24667218

  7. Finding the key to a better code: code team restructure to improve performance and outcomes.

    PubMed

    Prince, Cynthia R; Hines, Elizabeth J; Chyou, Po-Huang; Heegeman, David J

    2014-09-01

    Code teams respond to acute life threatening changes in a patient's status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team's resuscitation could be hindered. To improve the overall performance of our hospital's code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements in team member confidence in the skills specific to their role and clarity in their role's position were identified. Smaller improvements were seen in team leadership and reduction in the amount of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year since the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as decreased the average time-to-defibrillation, making it beneficial to many, including the team members, medical institution, and patients.

  8. Code Mixing in a Young Bilingual Child.

    ERIC Educational Resources Information Center

    Anderson, Raquel; Brice, Alejandro

    1999-01-01

    Spontaneous speech samples of a bilingual Spanish-English speaking child were collected during a period of 17 months (ages 6-8). Data revealed percentages and rank ordering of syntactic elements switched in the longitudinal language samples obtained. Specific recommendations for using code mixing in therapy for speech-language pathologists are…

  9. Iterative Decoding of Concatenated Codes: A Tutorial

    NASA Astrophysics Data System (ADS)

    Regalia, Phillip A.

    2005-12-01

    The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the "turbo principle" remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a "consensus" property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  10. A Code of Ethics for Democratic Leadership

    ERIC Educational Resources Information Center

    Molina, Ricardo; Klinker, JoAnn Franklin

    2012-01-01

    Democratic leadership rests on sacred values, awareness, judgement, motivation and courage. Four turning points in a 38-year school administrator's career revealed decision-making in problematic moments stemmed from values in a personal and professional code of ethics. Reflection on practice and theory added vocabulary and understanding to make…

  11. FLUKA: A Multi-Particle Transport Code

    SciTech Connect

    Ferrari, A.; Sala, P.R.; Fasso, A.; Ranft, J. (Siegen U.)

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  12. CHEETAH: A next generation thermochemical code

    SciTech Connect

    Fried, L.; Souers, P.

    1994-11-01

    CHEETAH is an effort to bring the TIGER thermochemical code into the 1990s. A wide variety of improvements have been made in Version 1.0. We have improved the robustness and ease of use of TIGER. All of TIGER's solvers have been replaced by new algorithms. We find that CHEETAH solves a wider variety of problems with no user intervention (e.g. no guesses for the C-J state) than TIGER did. CHEETAH has been made simpler to use than TIGER; typical use of the code occurs with the new standard run command. CHEETAH will make the use of thermochemical codes more attractive to practical explosive formulators. We have also made an extensive effort to improve over the results of TIGER. CHEETAH's version of the BKW equation of state (BKWC) is able to accurately reproduce energies from cylinder tests; something that other BKW parameter sets have been unable to do. Calculations performed with BKWC execute very quickly; typical run times are under 10 seconds on a workstation. In the future we plan to improve the underlying science in CHEETAH. More accurate equations of state will be used in the gas and the condensed phase. A kinetics capability will be added to the code that will predict reaction zone thickness. Further ease of use features will eventually be added; an automatic formulator that adjusts concentrations to match desired properties is planned.

  13. CAFE: A NEW RELATIVISTIC MHD CODE

    SciTech Connect

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S. E-mail: aosorio@astro.unam.mx

    2015-06-22

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin–Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin–Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.

  14. CAFE: A New Relativistic MHD Code

    NASA Astrophysics Data System (ADS)

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.

    2015-06-01

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.

  15. LEGO: A Modular Accelerator Design Code

    NASA Astrophysics Data System (ADS)

    Cai, Y.; Irwin, J.

    1997-05-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in three dimensional space. Several symplectic integrators are used to approximate the integration of the local Hamiltonians. A differential algebra class is introduced to extract a Taylor map up to an arbitrary order. Analysis of optics is done in the same way for both the linear and non-linear cases. Currently the code is used to design and simulate the lattices of the PEP-II. It will be used for the commissioning of the machines as well.

  16. Code White: A Signed Code Protection Mechanism for Smartphones

    DTIC Science & Technology

    2010-09-01

    if(TheSuperPage().KernelConfigFlags() & EKernelConfigPlatSecProcessIsolation) { diff -r 2ee5df201f60 kernel/eka/memmodel/epoc/multiple/mprocess.cpp --- a/kernel/eka/memmodel/epoc/multiple/mprocess.cpp Mon Mar 08 11:58:34 2010 +0000 +++ b/kernel/eka/memmodel/epoc/multiple/mprocess.cpp Thu

  17. Xenomicrobiology: a roadmap for genetic code engineering.

    PubMed

    Acevedo-Rocha, Carlos G; Budisa, Nediljko

    2016-09-01

    Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes in the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes and proteins/proteomes with non-canonical nucleic acids (XNAs) and amino acids (ncAAs). Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code, with a focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also allow us to study and understand the relation between code evolution and horizontal gene transfer.

  18. ICD9 Code Assistant: A prototype.

    PubMed

    Erdal, Selnur; Ding, Jing; Osborn, Carol; Mekhjian, Hagop; Kamal, Jyoti

    2007-10-11

    At The Ohio State University Medical Center (OSUMC), patient reports are available in real time along with other clinical and financial data in the OSUMC Information Warehouse (IW). Using the UMLS Metathesaurus, we have leveraged the IW to develop a tool that can assist medical record coders, as well as administrators, physicians, and researchers, in quickly identifying clinical concepts and their associated ICD-9 codes.

  19. A Fast Code for Jupiter Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq; Arnold, James (Technical Monitor)

    1998-01-01

    A fast code was developed to calculate the forebody heating environment and heat shielding that is required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry; the calculation required 3.5 sec of CPU time on a work station. The computed surface recessions from ablation were compared with the flight values at six body stations. The average, absolute, predicted difference in the recession was 12.5% too high. The forebody's mass loss was overpredicted by 5.5% and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was considered satisfactory, especially in view of the code's fast running time and the methods' approximations.

  20. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q≡1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [( q 2+1, q 2-4 i+3,1;2,2 i+2)] q , where q≥5 is an odd prime power and 2≤ i≤( q-1)/2; (ii) , where q is an odd prime power with the form q=10 m+3 or 10 m+7 ( m≥2), and 2≤ i≤2 m-1.
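
    The parameters of family (i) are pure arithmetic in q and i, so candidate codes can be enumerated directly. A small helper sketch (the function name is ours; it checks only the stated range conditions, not that q is actually a prime power):

```python
def mds_qcc_params(q: int, i: int):
    """Parameters [(q^2+1, q^2-4i+3, 1; 2, 2i+2)]_q of family (i)."""
    if q < 5 or q % 2 == 0:
        raise ValueError("q must be an odd prime power with q >= 5")
    if not 2 <= i <= (q - 1) // 2:
        raise ValueError("i must satisfy 2 <= i <= (q - 1)/2")
    return (q * q + 1, q * q - 4 * i + 3, 1, 2, 2 * i + 2)
```

    For example, q = 5 and i = 2 yield a [(26, 20, 1; 2, 6)]_5 code.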

  1. Experiments with a variable-order type 1 DIMSIM code

    NASA Astrophysics Data System (ADS)

    Butcher, J. C.; Chartier, P.; Jackiewicz, Z.

    1999-02-01

    The issues related to the development of a new code for nonstiff ordinary differential equations are discussed. This code is based on the Nordsieck representation of type 1 DIMSIMs, implemented in a variable-stepsize, variable-order mode. Numerical results demonstrate that the error estimation employed in the code is very reliable and that the step- and order-changing strategies are very robust. This code outperforms the Matlab ode45 code for moderate and stringent tolerances.

  2. Fiber-optic localization by geometric space coding with a two-dimensional gray code

    NASA Astrophysics Data System (ADS)

    Zheng, Yunhui; Brady, David J.; Sullivan, Michaell E.; Guenther, Bob D.

    2005-07-01

    With the objective of monitoring motion within a room, we segment the two-dimensional (2D) floor space into discrete cells and encode each cell with a binary code word generated by a fiber. We design a set of k-neighbor-local codes to localize an extended object and, particularly when k=2, employ a 2D gray code to localize a human by tracking his or her footsteps. Methods for implementing the codes in a fiber web are discussed, and we demonstrate the experimental result with the fiber mat. The observed system performance confirms the theoretical analysis. The space coding technique is a promising low-cost candidate not only for human tracking but also for other applications such as human gait analysis.
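
    The property being exploited is that a binary-reflected Gray code assigns consecutive integers code words differing in exactly one bit; concatenating the Gray codes of the row and column indices extends this to the 2D grid, so a step into a neighboring cell flips a single bit (one fiber). A minimal sketch of that idea (our own illustration, not the paper's k-neighbor-local construction):

```python
def gray(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def cell_code(row: int, col: int, bits: int) -> int:
    """2D code word: Gray code of the row concatenated with that of the column."""
    return (gray(row) << bits) | gray(col)

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")
```

    On an 8-by-8 grid (bits = 3), any horizontally or vertically adjacent pair of cells has code words at Hamming distance exactly 1.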

  3. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error-probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  4. A Review on Spectral Amplitude Coding Optical Code Division Multiple Access

    NASA Astrophysics Data System (ADS)

    Kaur, Navpreet; Goyal, Rakesh; Rani, Monika

    2017-03-01

    This manuscript analyzes the spectral amplitude coding optical code division multiple access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). System performance in terms of bit error rate (BER) degrades as MAI increases. The number of users and the type of codes employed directly determine system performance. MAI can be limited by designing optical codes efficiently and implementing them with an architecture that accommodates more users. Hence, a technique such as spectral direct detection (SDD) with a modified double-weight code is needed to provide better cardinality and good correlation properties.

  5. A Readout Mechanism for Latency Codes.

    PubMed

    Zohar, Oran; Shamir, Maoz

    2016-01-01

    Response latency has been suggested as a possible source of information in the central nervous system when fast decisions are required. The accuracy of latency codes was studied in the past using a simplified readout algorithm termed the temporal winner-take-all (tWTA). The tWTA is a competitive readout algorithm in which populations of neurons with a similar decision preference compete, and the algorithm selects according to the preference of the population that reaches the decision threshold first. It has been shown that this algorithm can account for accurate decisions among a small number of alternatives during short, biologically relevant time periods. However, one of the major points of criticism of latency codes has been that it is unclear how such a readout could be implemented by the central nervous system. Here we show that the solution to this long-standing puzzle may be rather simple. We suggest a mechanism based on a reciprocal inhibition architecture, similar to that of the conventional winner-take-all, and show that under a wide range of parameters this mechanism is sufficient to implement the tWTA algorithm. This is done by first analyzing a toy rate model and demonstrating its ability to discriminate short latency differences between its inputs. We then study the sensitivity of this mechanism to fine-tuning of its initial conditions and show that it is robust to a wide range of noise levels in the initial conditions. These results are then generalized to a Hodgkin-Huxley-type neuron model using numerical simulations. Latency codes have been criticized for requiring a reliable stimulus-onset detection mechanism as a reference for measuring latency. Here we show that this frequent assumption does not hold, and that an additional onset estimator is not needed to trigger this simple tWTA mechanism.
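
    The reciprocal-inhibition idea can be sketched as a toy rate model. This is a minimal illustration, not the paper's model: the time constants, inhibition weight, drive, and threshold below are invented for the example. Two units receive step inputs that switch on at different latencies and inhibit each other; the unit whose input arrives first crosses threshold and suppresses the other, so the readout reports which input was earlier:

```python
def twta(latency_a=5.0, latency_b=7.0, dt=0.1, tau=10.0,
         w_inh=2.0, drive=1.5, theta=0.8, t_max=200.0):
    """Toy temporal winner-take-all: returns the label of the unit
    that reaches threshold first (illustrative parameters only)."""
    ra = rb = 0.0
    t = 0.0
    while t < t_max:
        # Step inputs switch on at each unit's latency.
        ia = drive if t >= latency_a else 0.0
        ib = drive if t >= latency_b else 0.0
        # Rate dynamics with reciprocal inhibition (rectified drive).
        da = (-ra + max(ia - w_inh * rb, 0.0)) / tau
        db = (-rb + max(ib - w_inh * ra, 0.0)) / tau
        ra += dt * da
        rb += dt * db
        if ra >= theta:
            return "A"
        if rb >= theta:
            return "B"
        t += dt
    return None

# A 2 ms latency advantage decides the competition:
winner = twta(latency_a=5.0, latency_b=7.0)
```

    The head start matters because the earlier unit's activity both drives it toward threshold and shrinks the effective input of its competitor, amplifying small latency differences into a categorical decision.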

  7. Interface requirements for coupling a containment code to reactor system thermal-hydraulic codes

    SciTech Connect

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss-of-coolant accident in both pressurized water and boiling water reactors, or inadvertent operation of safety relief valves, all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code with the containment represented as a fixed-pressure boundary condition. The flows were usually from the primary system to the containment initially, and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.

  8. The Universal Transverse Mercator Code: A location code for disease reporting.

    PubMed

    Tinline, R R; Gregory, D

    1988-10-01

    Since November 1987, all rabies specimen reports submitted by Agriculture Canada's District Veterinary Officers have required a new location code, the Universal Transverse Mercator Code (UTMC). In addition to the previously required entries for county, district, legal address and mailing addresses, the new code is set up for computer analysis and mapping. It is capable of pinpointing the origin of the specimen to within 100 meters anywhere in Canada that is covered by the National Topographic System 1:50,000 maps. Because of its 100 meter spatial resolution, the code is of great interest to those studying the occurrence and spread of rabies. The code will also be important in the detailed planning and evaluation of the Ontario rabies control scheme, scheduled for 1988. Agriculture Canada anticipates that the UTMC will also be used for reporting other animal diseases as well as for emergency disease reporting.
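
    The abstract does not give the exact UTMC field layout, so the following is only an illustrative sketch of what a 100 m location code requires: truncating UTM easting and northing to hectometer (100 m) units and combining them with the zone number. The field widths and separator are assumptions:

```python
def utm_100m_code(zone, easting, northing):
    """Illustrative 100 m location code from UTM coordinates (meters).

    Hypothetical format, not the actual UTMC layout: integer-divide
    easting/northing by 100 so every point inside the same 100 m
    cell maps to the same code string.
    """
    e100 = int(easting) // 100   # hectometers east
    n100 = int(northing) // 100  # hectometers north
    return f"{zone:02d}-{e100:04d}-{n100:05d}"

# Two points 40 m apart fall in the same 100 m cell:
c1 = utm_100m_code(18, 377_412, 4_833_438)
c2 = utm_100m_code(18, 377_452, 4_833_438)
```

    Truncation (rather than rounding) is the natural choice here, since a cell is then a fixed 100 m square of the UTM grid that can be plotted directly on 1:50,000 map sheets.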

  9. Visual mismatch negativity: a predictive coding view

    PubMed Central

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control “refractoriness” effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  10. A Comparative Study on Seismic Analysis of Bangladesh National Building Code (BNBC) with Other Building Codes

    NASA Astrophysics Data System (ADS)

    Bari, Md. S.; Das, T.

    2013-09-01

    The tectonic framework of Bangladesh and adjoining areas indicates that Bangladesh lies well within an active seismic zone. The aftereffects of an earthquake are more severe in an underdeveloped, densely populated country like Bangladesh than in developed countries. The Bangladesh National Building Code (BNBC) was first established in 1993 to provide guidelines for the design and construction of new structures subject to earthquake ground motions, in order to minimize the risk to life for all structures. A revision of BNBC 1993 is under way to bring it up to date with other international building codes. This paper compares the various provisions for seismic analysis given in the building codes of different countries. This comparison gives an idea of where Bangladesh stands when it comes to safety against earthquakes. Primarily, various seismic parameters in BNBC 2010 (draft) have been studied and compared with those of BNBC 1993. Later, both the 1993 and 2010 editions of BNBC have been compared graphically with building codes of other countries, such as the National Building Code of India 2005 (NBC-India 2005) and the American Society of Civil Engineers 7-05 (ASCE 7-05). The base shear/weight ratios have been plotted against the height of the building. The investigation in this paper reveals that BNBC 1993 has the least base shear among all the codes. Factored base shear values of BNBC 2010 are found to have increased significantly over those of BNBC 1993 for low-rise buildings (≤20 m). Despite the revision, BNBC 2010 (draft) still suggests lower base shear values than the Indian and American codes. The increase in the factor of safety against earthquakes imposed by the proposed BNBC 2010 code, through higher base shear values, is nevertheless appreciable.

  11. CHEETAH: A fast thermochemical code for detonation

    SciTech Connect

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, whereby CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and a basis in computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL; we will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will also fit C-J data to JWLs, and predict full-wall and half-wall cylinder velocities.

  12. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel A. Lazerson, S. Sakakibara and Y. Suzuki

    2013-03-12

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for the treatment of equilibria with both good nested flux surfaces and stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) in which the vertical field was ramped. As an exercise of the code, the diagnostic responses for various equilibria are calculated on the LHD.

  13. A Construction of Lossy Source Code Using LDPC Matrices

    NASA Astrophysics Data System (ADS)

    Miyake, Shigeki; Muramatsu, Jun

    Research into applying LDPC code theory, which is used for channel coding, to source coding has received a lot of attention in several research fields such as distributed source coding. In this paper, a source coding problem with a fidelity criterion is considered. Matsunaga et al. and Martinian et al. constructed a lossy code under the conditions of a binary alphabet, a uniform distribution, and a Hamming measure of fidelity. We extend their results and construct a lossy code under the extended conditions of a binary alphabet, a distribution that is not necessarily uniform, and a fidelity measure that is bounded and additive, and show that the code can achieve the optimal rate, the rate-distortion function. By applying a formula for the random walk on a lattice to the analysis of LDPC matrices on Zq, where q is a prime number, we show that results similar to those for the binary alphabet also hold for Zq, i.e., for the multiple-alphabet condition.
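
    The target rate such constructions try to achieve is the classical rate-distortion function; for a Bernoulli(p) source under Hamming distortion it is R(D) = h(p) - h(D) for 0 <= D <= min(p, 1-p), and zero beyond. A small sketch of this standard textbook formula (not code from the paper):

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion(p, d):
    """R(D) = h(p) - h(D) for a Bernoulli(p) source with Hamming
    distortion; valid for 0 <= D <= min(p, 1-p), zero beyond."""
    if d >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(d)

# Uniform source, lossless limit: one bit per symbol.
r_lossless = rate_distortion(0.5, 0.0)
```

    Note how the non-uniform case (p != 1/2) treated in the paper lowers the curve through the h(p) term, which is exactly why the uniform-distribution assumption of the earlier constructions was restrictive.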

  14. Python interface generator for Fortran based codes (a code development aid)

    SciTech Connect

    Grote, D. P.

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  15. Polar Codes

    DTIC Science & Technology

    2014-12-01

    This report evaluates polar codes against other forward error correction methods: a turbo code, a low density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. Many civilian systems use LDPC FEC codes, and the Navy is planning to use LDPC for some future systems.

  16. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  18. A construction of quantum turbo product codes based on CSS-type quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    Xiao, Hailin; Ni, Ju; Xie, Wu; Ouyang, Shan

    As in classical coding theory, turbo product codes (TPCs) formed by serially concatenated block codes can approach the Shannon capacity limit and have low decoding complexity. However, special requirements in the quantum setting severely limit the structures of quantum turbo product codes (QTPCs). To design a good structure for QTPCs, we present a new construction of QTPCs with the interleaved serial concatenation of CSS(L1,L2)-type quantum convolutional codes (QCCs). First, CSS(L1,L2)-type QCCs are proposed by exploiting the theory of CSS-type quantum stabilizer codes and QCCs, and the description and analysis of the encoder circuit are greatly simplified in terms of Hadamard and C-NOT gates. Second, the interleaved coded matrix of QTPCs is derived from the definition of the quantum permutation (SWAP) gate. Finally, we prove the corresponding relation on the minimum Hamming distance of QTPCs associated with classical TPCs, and describe the state diagrams of the encoder and decoder of QTPCs, which have a highly regular structure and a simple design.

  19. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2010-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
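
    The encoder chain described above (outer coder, interleaver, recursive inner coder, mapper) can be sketched with toy stand-ins. The rate-1/2 repetition outer code, the random interleaver, and the BPSK mapping below are illustrative choices, not the patent's actual components; only the accumulator is the classic recursive rate-1 inner code used in serial turbo schemes:

```python
import random

def outer_encode(bits):
    """Toy outer coder: rate-1/2 repetition (a stand-in only)."""
    return [b for b in bits for _ in range(2)]

def interleave(bits, perm):
    """Permute the outer-coded bits."""
    return [bits[i] for i in perm]

def inner_encode(bits):
    """Recursive inner coder: accumulator (running XOR).  Its
    recursion is what makes iterative decoding effective."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def map_bpsk(bits):
    """Map coded bits to +/-1 channel symbols (toy mapper)."""
    return [1 - 2 * b for b in bits]

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(8)]
coded = outer_encode(data)
perm = list(range(len(coded)))
rng.shuffle(perm)
symbols = map_bpsk(inner_encode(interleave(coded, perm)))
```

    In the real SCTCM scheme the inner code and mapper are chosen jointly to maximize the effective free Euclidean distance, and the decoder inverts this chain iteratively with SISO modules and a deinterleaver.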

  20. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2011-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  1. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states for the life of research codes are reviewed. Historically, codes are typically left dormant in an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third-party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  2. A draft model aggregated code of ethics for bioethicists.

    PubMed

    Baker, Robert

    2005-01-01

    Bioethicists function in an environment in which their peers--healthcare executives, lawyers, nurses, physicians--assert the integrity of their fields through codes of professional ethics. Is it time for bioethics to assert its integrity by developing a code of ethics? Answering in the affirmative, this paper lays out a case by reviewing the historical nature and function of professional codes of ethics. Arguing that professional codes are aggregative enterprises growing in response to a field's historical experiences, it asserts that bioethics now needs to assert its integrity and independence and has already developed a body of formal statements that could be aggregated to create a comprehensive code of ethics for bioethics. A Draft Model Aggregated Code of Ethics for Bioethicists is offered in the hope that analysis and criticism of this draft code will promote further discussion of the nature and content of a code of ethics for bioethicists.

  3. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.55a Codes and standards. Each construction permit for a... meet the requirements of the ASME Boiler and Pressure Vessel Code specified in paragraphs (b), (c),...

  4. CALIOP: a multichannel design code for gas-cooled fast reactors. Code description and user's guide

    SciTech Connect

    Thompson, W.I.

    1980-10-01

    CALIOP is a design code for fluid-cooled reactors composed of parallel fuel tubes in hexagonal or cylindrical ducts. It may be used with gaseous or liquid coolants. It has been used chiefly for design of a helium-cooled fast breeder reactor and has built-in cross section information to permit calculations of fuel loading, breeding ratio, and doubling time. Optional cross-section input allows the code to be used with moderated cores and with other fuels.

  5. Studying genetic code by a matrix approach.

    PubMed

    Crowder, Tanner; Li, Chi-Kwong

    2010-05-01

    Following Petoukhov and his collaborators, we use two length-n zero-one sequences, alpha and beta, to represent a length-n genetic sequence (alpha/beta) so that the columns of (alpha/beta) have the following correspondence with the nucleotides: C ~ (0/0), U ~ (1/0), G ~ (1/1), A ~ (0/1). Using the Gray code ordering to arrange alpha and beta, we build a 2^n x 2^n matrix C(n) including all the 4^n length-n genetic sequences. Furthermore, we use the Hamming distance of alpha and beta to construct a 2^n x 2^n matrix D(n). We explore the structures of these matrices, refine the results in earlier papers, and propose new directions for further research.
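
    The matrix C(n) can be sketched directly from the nucleotide correspondence given above. A minimal illustration (function and variable names are ours, not from the paper):

```python
def gray(k):
    """Reflected binary Gray code of integer k."""
    return k ^ (k >> 1)

# Nucleotide for one column pair (alpha_bit, beta_bit), as stated:
# C ~ (0/0), U ~ (1/0), G ~ (1/1), A ~ (0/1)
NUC = {(0, 0): "C", (1, 0): "U", (1, 1): "G", (0, 1): "A"}

def bits(k, n):
    """The n bits of k, most significant first."""
    return [(k >> (n - 1 - i)) & 1 for i in range(n)]

def genetic_matrix(n):
    """2^n x 2^n matrix C(n): rows indexed by alpha and columns by
    beta, both in Gray-code order; each entry is the length-n
    genetic sequence determined column-by-column by NUC."""
    order = [gray(k) for k in range(2 ** n)]
    return [["".join(NUC[(a, b)]
                     for a, b in zip(bits(alpha, n), bits(beta, n)))
             for beta in order]
            for alpha in order]

C2 = genetic_matrix(2)  # all 16 length-2 sequences, Gray ordered
```

    The companion matrix D(n) would simply replace each entry by the Hamming distance between the row's alpha and the column's beta.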

  6. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
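
    The general flavor of iterative ART can be illustrated with the classical Kaczmarz update, which cycles through the projection equations of A x = b and corrects the estimate along each row. This is a generic sketch of the ART family, not the paper's specific iterative-refinement least-squares scheme:

```python
def art_reconstruct(A, b, n_iter=200, relax=1.0):
    """Kaczmarz-style ART: for each measurement row a_i, project the
    current estimate x onto the hyperplane a_i . x = b_i."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        for row, bi in zip(A, b):
            norm = sum(a * a for a in row)
            if norm == 0:
                continue
            # Residual of this equation, normalized by the row norm.
            r = (bi - sum(a * xi for a, xi in zip(row, x))) / norm
            for j in range(n):
                x[j] += relax * r * row[j]
    return x

# Consistent 2x2 toy system with solution x = (1, 2):
A = [[1.0, 0.0], [1.0, 1.0]]
b = [1.0, 3.0]
x = art_reconstruct(A, b)
```

    For a consistent system the iterates converge to a solution; for noisy, inconsistent data (as in real interferometry) the relaxation factor and stopping rule control how close the result comes to the least-squares solution.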

  7. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  8. Toward a Code of Conduct for Graduate Education

    ERIC Educational Resources Information Center

    Proper, Eve

    2012-01-01

    Most academic disciplines promulgate codes of ethics that serve as public statements of professional norms of their membership. These codes serve both symbolic and practical purposes, stating to both members and the larger public what a discipline's highest ethics are. This article explores what scholarly society codes of ethics could say about…

  9. A MCTF video coding scheme based on distributed source coding principles

    NASA Astrophysics Data System (ADS)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose a lifting-based MCTF video coding scheme in which the prediction step is implemented using PRISM (Power-efficient, Robust, hIgh-compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that performing the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when video sequences are coded at low bitrates.
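
    The predict/update structure of lifting can be illustrated with a motion-free, Haar-like temporal transform. This is a simplified sketch: real MCTF applies the predict step along motion trajectories, and the paper's contribution concerns where the update step is performed, not the transform itself:

```python
def lift_forward(frames):
    """One level of Haar-like temporal lifting on a list of frames
    (scalars here for simplicity).

    Predict: high band h = odd - even   (prediction residual)
    Update:  low band  l = even + h/2  (equals the pair average)
    """
    even, odd = frames[0::2], frames[1::2]
    h = [o - e for o, e in zip(odd, even)]
    l = [e + hi / 2 for e, hi in zip(even, h)]
    return l, h

def lift_inverse(l, h):
    """Perfect reconstruction: undo the update, then the predict."""
    even = [li - hi / 2 for li, hi in zip(l, h)]
    odd = [e + hi for e, hi in zip(even, h)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

frames = [10.0, 12.0, 11.0, 15.0]
l, h = lift_forward(frames)
rec = lift_inverse(l, h)   # reconstructs the original frames
```

    Because each lifting step is trivially invertible, the update step can in principle be moved across the encoder/decoder boundary, which is exactly the degree of freedom the paper exploits.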

  10. Synaptic Plasticity as a Cortical Coding Scheme

    PubMed Central

    Froemke, Robert C.; Schreiner, Christoph E.

    2015-01-01

    Processing of auditory information requires constant adjustment due to alterations of the environment and changing conditions in the nervous system with age, health, and experience. Consequently, patterns of activity in cortical networks have complex dynamics over a wide range of timescales, from milliseconds to days and longer. In the primary auditory cortex (AI), multiple forms of adaptation and plasticity shape synaptic input and action potential output. However, the variance of neuronal responses has made it difficult to characterize AI receptive fields and to determine the function of AI in processing auditory information such as vocalizations. Here we describe recent studies on the temporal modulation of cortical responses and consider the relation of synaptic plasticity to neural coding. PMID:26497430

  11. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  12. Circular code motifs in transfer and 16S ribosomal RNAs: a possible translation code in genes.

    PubMed

    Michel, Christian J

    2012-04-01

    In 1996, a common trinucleotide circular code, called X, was identified in genes of eukaryotes and prokaryotes (Arquès and Michel, 1996). This circular code X is a set of 20 trinucleotides allowing the reading frame in genes to be retrieved locally, i.e. anywhere in genes and in particular without start codons. This reading frame retrieval needs a window length l of at least 12 nucleotides (l ≥ 12). With a window length strictly less than 12 nucleotides (l < 12), some words of X, called ambiguous words, are found in the shifted frames (the reading frame shifted by one or two nucleotides), preventing the reading frame in genes from being retrieved. Since 1996, these ambiguous words of X had never been studied. In the first part of this paper, we identify all the ambiguous words of the common trinucleotide circular code X. With a length l varying from 1 to 11 nucleotides, the type and the occurrence number (multiplicity) of ambiguous words of X are given in each shifted frame. Maximal ambiguous words of X, words which are not factors of other ambiguous words, are also determined. Two probability definitions based on these results show that the common trinucleotide circular code X retrieves the reading frame in genes with a probability of about 90% with a window length of 6 nucleotides, and a probability of 99.9% with a window length of 9 nucleotides (100% with a window length of 12 nucleotides, by definition of a circular code). In the second part of this paper, we identify X circular code motifs (shortly, X motifs) in transfer RNA and 16S ribosomal RNA: a tRNA X motif of 26 nucleotides including the anticodon stem-loop, and seven 16S rRNA X motifs of length greater than or equal to 15 nucleotides. Window lengths of reading frame retrieval with each trinucleotide of these X motifs are also determined. Thanks to the crystal structure 3I8G (Jenner et al., 2010), a 3D visualization of X motifs in the ribosome shows several spatial configurations involving mRNA X motifs, A-tRNA and E-tRNA X
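    The frame-retrieval property can be illustrated with a toy example. The sketch below uses a hypothetical two-word code (not the 20-trinucleotide code X of the paper) and reports which frame shifts of a window read entirely in code words; retrieval succeeds when exactly one shift qualifies.

    ```python
    def frames_in_code(window, code):
        """Return the frame shifts (0, 1, 2) for which every complete
        trinucleotide of the window belongs to the given code."""
        hits = []
        for shift in range(3):
            triplets = [window[i:i + 3] for i in range(shift, len(window) - 2, 3)]
            if triplets and all(t in code for t in triplets):
                hits.append(shift)
        return hits
    ```

    With the hypothetical code {AAC, GTC} and the window "AACGTCAAC", only shift 0 reads entirely in code words, so the reading frame is retrieved; an ambiguous word, by contrast, would make a shifted frame qualify as well for short windows.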

  13. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  14. An Improved Canine Genome and a Comprehensive Catalogue of Coding Genes and Non-Coding Transcripts

    PubMed Central

    Hoeppner, Marc P.; Lundquist, Andrew; Pirun, Mono; Meadows, Jennifer R. S.; Zamani, Neda; Johnson, Jeremy; Sundström, Görel; Cook, April; FitzGerald, Michael G.; Swofford, Ross; Mauceli, Evan; Moghadam, Behrooz Torabi; Greka, Anna; Alföldi, Jessica; Abouelleil, Amr; Aftuck, Lynne; Bessette, Daniel; Berlin, Aaron; Brown, Adam; Gearin, Gary; Lui, Annie; Macdonald, J. Pendexter; Priest, Margaret; Shea, Terrance; Turner-Maier, Jason; Zimmer, Andrew; Lander, Eric S.; di Palma, Federica

    2014-01-01

    The domestic dog, Canis familiaris, is a well-established model system for mapping trait and disease loci. While the original draft sequence was of good quality, gaps were abundant particularly in promoter regions of the genome, negatively impacting the annotation and study of candidate genes. Here, we present an improved genome build, canFam3.1, which includes 85 MB of novel sequence and now covers 99.8% of the euchromatic portion of the genome. We also present multiple RNA-Sequencing data sets from 10 different canine tissues to catalog ∼175,000 expressed loci. While about 90% of the coding genes previously annotated by EnsEMBL have measurable expression in at least one sample, the number of transcript isoforms detected by our data expands the EnsEMBL annotations by a factor of four. Syntenic comparison with the human genome revealed an additional ∼3,000 loci that are characterized as protein coding in human and were also expressed in the dog, suggesting that those were previously not annotated in the EnsEMBL canine gene set. In addition to ∼20,700 high-confidence protein coding loci, we found ∼4,600 antisense transcripts overlapping exons of protein coding genes, ∼7,200 intergenic multi-exon transcripts without coding potential, likely candidates for long intergenic non-coding RNAs (lincRNAs) and ∼11,000 transcripts were reported by two different library construction methods but did not fit any of the above categories. Of the lincRNAs, about 6,000 have no annotated orthologs in human or mouse. Functional analysis of two novel transcripts with shRNA in a mouse kidney cell line altered cell morphology and motility. All in all, we provide a much-improved annotation of the canine genome and suggest regulatory functions for several of the novel non-coding transcripts. PMID:24625832

  15. An improved canine genome and a comprehensive catalogue of coding genes and non-coding transcripts.

    PubMed

    Hoeppner, Marc P; Lundquist, Andrew; Pirun, Mono; Meadows, Jennifer R S; Zamani, Neda; Johnson, Jeremy; Sundström, Görel; Cook, April; FitzGerald, Michael G; Swofford, Ross; Mauceli, Evan; Moghadam, Behrooz Torabi; Greka, Anna; Alföldi, Jessica; Abouelleil, Amr; Aftuck, Lynne; Bessette, Daniel; Berlin, Aaron; Brown, Adam; Gearin, Gary; Lui, Annie; Macdonald, J Pendexter; Priest, Margaret; Shea, Terrance; Turner-Maier, Jason; Zimmer, Andrew; Lander, Eric S; di Palma, Federica; Lindblad-Toh, Kerstin; Grabherr, Manfred G

    2014-01-01

    The domestic dog, Canis familiaris, is a well-established model system for mapping trait and disease loci. While the original draft sequence was of good quality, gaps were abundant particularly in promoter regions of the genome, negatively impacting the annotation and study of candidate genes. Here, we present an improved genome build, canFam3.1, which includes 85 MB of novel sequence and now covers 99.8% of the euchromatic portion of the genome. We also present multiple RNA-Sequencing data sets from 10 different canine tissues to catalog ∼175,000 expressed loci. While about 90% of the coding genes previously annotated by EnsEMBL have measurable expression in at least one sample, the number of transcript isoforms detected by our data expands the EnsEMBL annotations by a factor of four. Syntenic comparison with the human genome revealed an additional ∼3,000 loci that are characterized as protein coding in human and were also expressed in the dog, suggesting that those were previously not annotated in the EnsEMBL canine gene set. In addition to ∼20,700 high-confidence protein coding loci, we found ∼4,600 antisense transcripts overlapping exons of protein coding genes, ∼7,200 intergenic multi-exon transcripts without coding potential, likely candidates for long intergenic non-coding RNAs (lincRNAs) and ∼11,000 transcripts were reported by two different library construction methods but did not fit any of the above categories. Of the lincRNAs, about 6,000 have no annotated orthologs in human or mouse. Functional analysis of two novel transcripts with shRNA in a mouse kidney cell line altered cell morphology and motility. All in all, we provide a much-improved annotation of the canine genome and suggest regulatory functions for several of the novel non-coding transcripts.

  16. Codes and morals: is there a missing link? (The Nuremberg Code revisited).

    PubMed

    Hick, C

    1998-01-01

    Codes are a well-known and popular but weak form of ethical regulation in medical practice. There is, however, a lack of research on the relations between moral judgments and ethical codes, or on the possibility of morally justifying these codes. Our analysis begins by showing, given the Nuremberg Code, how a typical reference to natural law has historically served as moral justification. We then indicate, following the analyses of H. T. Engelhardt, Jr., and A. MacIntyre, why such general moral justifications of codes must necessarily fail in a society of "moral strangers." Going beyond Engelhardt, we argue that after the genealogical suspicion in morals raised by Nietzsche, not even Engelhardt's "principle of permission" can be rationally justified in a strong sense--a problem of transcendental argumentation in morals already recognized by I. Kant. Therefore, we propose to abandon the project of providing general justifications for moral judgments and to replace it with a hermeneutical analysis of ethical meanings in real-world situations, starting with the archetypal ethical situation, the encounter with the Other (E. Levinas).

  17. A hierarchical approach to coding chemical, biological and pharmaceutical substances.

    PubMed

    Keefe, Anya R; Bert, Joel L; Grace, John R; Makaroff, Sylvia J; Lang, Barbara J; Band, Pierre R

    2005-01-01

    This hierarchical coding system is designed to classify substances into successively subordinate categories on the basis of chemical, physical and biological properties. Although initially developed for occupational cancer epidemiological studies, it is general in nature and can be used for other purposes where a systematic approach is needed to catalogue or analyze large numbers of substances and/or physical properties. The coding system incorporates a multi-level approach, in which substances can be coded on the basis of both function and composition. On the first level, a three-digit code is assigned to each substance to indicate its primary use in the occupational environment (e.g. pesticide, catalyst, adhesive). Substances can then be coded using a ten-digit code to indicate structure and composition (e.g. organic molecule, biomolecule, pharmaceutical). Depending on the complexity required, analysis can incorporate the three-digit code, the ten-digit code, or a combination of both. The approach to coding both chemical and biological agents is modeled in part after conventional approaches used by the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Biochemistry (IUB). Development of the coding system was initiated in the 1980s in response to a need for a system allowing analysis of individual agents as well as classes or groups of substances. The project was undertaken as a collaborative venture between the BC Cancer Agency, Cancer Control Research program (then Division of Epidemiology) and the Department of Chemical and Biological Engineering at the University of British Columbia.
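    The two-level scheme can be sketched as a small data structure. The code values below are entirely hypothetical, chosen only to show how analysis can group by function, by composition prefix, or both.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Substance:
        name: str
        function_code: str      # three digits: primary occupational use
        composition_code: str   # ten digits: structure/composition hierarchy

    # Hypothetical assignments, purely illustrative of the hierarchy.
    catalogue = [
        Substance("malathion", "101", "2003100000"),
        Substance("parathion", "101", "2003100100"),
        Substance("platinum",  "205", "1001000000"),
    ]

    def by_function(catalogue, prefix):
        """Group by use: substances whose function code starts with the prefix."""
        return [s.name for s in catalogue if s.function_code.startswith(prefix)]

    def by_composition(catalogue, prefix):
        """Group by chemistry: match on leading composition digits, so broader
        prefixes select broader chemical classes."""
        return [s.name for s in catalogue if s.composition_code.startswith(prefix)]
    ```

    A prefix match on the hierarchical digits is what lets the same catalogue support analyses of individual agents as well as classes of substances.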

  18. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  19. A code generation framework for the ALMA common software

    NASA Astrophysics Data System (ADS)

    Troncoso, Nicolás; von Brand, Horst H.; Ibsen, Jorge; Mora, Matias; Gonzalez, Victor; Chiozzi, Gianluca; Jeram, Bogdan; Sommer, Heiko; Zamora, Gabriel; Tejeda, Alexis

    2010-07-01

    Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of model-driven code generation to transform a UML model directly into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework; moreover, fewer handcrafted LOC reduce the error rate. Additional benefits of model-driven code generation are software reuse, implicit application of design patterns, and automatic test generation. A model-driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets. The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator. OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process and aiding developers by keeping a clean logical separation between the two. Our first results show that code generation dramatically improves code productivity.
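    The model-to-text idea can be sketched in miniature. This is not openArchitectureWare or the ACS generator; it is a toy translator over a hypothetical UML-like model, showing the two points the abstract makes: data-type mapping is the core of the translation, and generated code stays separate from handcrafted code (which would live in a subclass).

    ```python
    # A hypothetical UML-like model: component name plus typed properties.
    model = {
        "component": "Lamp",
        "properties": [("brightness", "double"), ("isOn", "boolean")],
    }

    # Hypothetical mapping from model data types to target-language types.
    TYPE_MAP = {"double": "float", "boolean": "bool"}

    def generate(model):
        """Emit a base-class skeleton from the model. Application logic is
        meant to go into a subclass, leaving generated code untouched."""
        lines = [f"class {model['component']}Base:", "    def __init__(self):"]
        for name, mtype in model["properties"]:
            target = TYPE_MAP[mtype]
            lines.append(f"        self.{name}: {target} = {target}()")
        return "\n".join(lines)
    ```

    Regenerating after a model change then never clobbers hand-written code, which is the clean logical separation the paper describes.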

  20. A code inspection process for security reviews

    SciTech Connect

    Garzoglio, Gabriele; /Fermilab

    2009-05-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  1. Proposal for a kava quality standardization code.

    PubMed

    Teschke, Rolf; Lebot, Vincent

    2011-10-01

    Rare cases of hepatotoxicity emerged with the use of kava drugs and dietary supplements prepared from rhizomes and roots of the South Pacific plant kava (Piper methysticum). Their psychoactive, anxiolytic, relaxing, and recreational ingredients are the kavalactones kavain, dihydrokavain, methysticin, dihydromethysticin, yangonin, and desmethoxyyangonin, but there is little evidence that these kavalactones or the non-kavalactones pipermethystine and flavokavain B are the culprits of the adverse hepatic reactions. It rather appears that poor quality of the kava material was responsible for the liver toxicity. Analysis of existing kava quality standardizations with focus on chemical, agricultural, manufacturing, nutritional, regulatory, and legislation backgrounds showed major shortcomings that could easily explain quality problems. We therefore suggest a uniform, internationally accepted device for kava quality standardizations that are in the interest of the consumers because of safety reasons and will meet the expectations of kava farmers, pharmaceutical manufacturers, regulators of agencies, and legislators. The initial step resides in the establishment of Pan-Pacific kava quality legislation as an important part of the proposed Kava Quality Standardization Code. In conclusion, a sophisticated approach to establish kava quality standardizations is needed for safe human use of kava as relaxing traditional beverages, the anxiolytic drugs, and recreational dietary supplements.

  2. A concatenated coded modulation scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, Tadao; Takata, Toyoo; Fujiwara, Toru; Lin, Shu

    1990-01-01

    A concatenated coded modulation scheme for error control in data communications is presented. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth-efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high-speed satellite communications for large file transfer where high reliability is required. Also presented is a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft-decision Viterbi decoding algorithm.
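    The figure of merit mentioned above, minimum squared Euclidean distance of a block code over a PSK constellation, is easy to compute directly. The sketch below uses a hypothetical toy inner code (four codewords of three 8-PSK symbols each), not one of the codes constructed in the paper.

    ```python
    import cmath
    import itertools

    def psk_point(symbol, m=8):
        """Unit-energy M-ary PSK constellation point for an integer symbol."""
        return cmath.exp(2j * cmath.pi * symbol / m)

    def min_squared_euclidean_distance(codewords, m=8):
        """Smallest squared Euclidean distance between distinct codewords,
        each given as a tuple of M-ary PSK symbol indices."""
        best = float("inf")
        for a, b in itertools.combinations(codewords, 2):
            d2 = sum(abs(psk_point(x, m) - psk_point(y, m)) ** 2
                     for x, y in zip(a, b))
            best = min(best, d2)
        return best

    # Hypothetical toy inner code: symbols 0 and 4 are antipodal in 8-PSK,
    # and every pair of codewords differs in exactly two positions.
    toy_inner_code = [(0, 0, 0), (0, 4, 4), (4, 0, 4), (4, 4, 0)]
    ```

    Each antipodal symbol pair contributes a squared distance of 4, so this toy code has minimum squared Euclidean distance 8; a code search for good inner codes amounts to maximizing this quantity at a given rate.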

  3. A GPU code for analytic continuation through a sampling method

    NASA Astrophysics Data System (ADS)

    Nordström, Johan; Schött, Johan; Locht, Inka L. M.; Di Marco, Igor

    We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  4. Para: a computer simulation code for plasma driven electromagnetic launchers

    SciTech Connect

    Thio, Y.-C.

    1983-03-01

    A computer code for simulation of rail-type accelerators utilizing a plasma armature has been developed and is described in detail. Some time varying properties of the plasma are taken into account in this code thus allowing the development of a dynamical model of the behavior of a plasma in a rail-type electromagnetic launcher. The code is being successfully used to predict and analyse experiments on small calibre rail-gun launchers.

  5. A coded modulation design for the INMARSAT geostationary GLONASS augmentation

    NASA Astrophysics Data System (ADS)

    Stein, B.; Tsang, W.

    A coded modulation scheme is proposed to carry out the Global Navigation Satellite System (GLONASS) geostationary augmentation, which includes both integrity and navigation functions, over the next-generation International Maritime Satellite Organization (INMARSAT) satellites. A baseline coded modulation scheme for the GLONASS augmentation broadcast proposes a forward error correction code over a differential phase shift keying (DPSK) modulation. The use of a concatenated code over the same signaling is considered. The proposed coded modulation design is more powerful and robust, yet not overly more complex in system implementation than the baseline scheme. Performance results of concatenated codes over DPSK signaling used in the design are presented. The sensitivity analysis methodology used in selecting the coded modulation scheme is also discussed.

  6. A Simple Tight Bound on Error Probability of Block Codes with Application to Turbo Codes

    NASA Astrophysics Data System (ADS)

    Divsalar, D.

    1999-07-01

    A simple bound on the probability of decoding error for block codes is derived in closed form. This bound is based on the bounding techniques developed by Gallager. We obtained an upper bound both on the word-error probability and the bit-error probability of block codes. The bound is simple, since it does not require any integration or optimization in its final version. The bound is tight since it works for signal-to-noise ratios (SNRs) very close to the Shannon capacity limit. The bound uses only the weight distribution of the code. The bound for nonrandom codes is tighter than the original Gallager bound and its new versions derived by Sason and Shamai and by Viterbi and Viterbi. It also is tighter than the recent simpler bound by Viterbi and Viterbi and simpler than the bound by Duman and Salehi, which requires two-parameter optimization. For long blocks, it competes well with more complex bounds that involve integration and parameter optimization, such as the tangential sphere bound by Poltyrev, elaborated by Sason and Shamai, and investigated by Viterbi and Viterbi, and the geometry bound by Dolinar, Ekroot, and Pollara. We also obtained a closed-form expression for the minimum SNR threshold that can serve as a tight upper bound on maximum-likelihood capacity of nonrandom codes. We also have shown that this minimum SNR threshold of our bound is the same as for the tangential sphere bound of Poltyrev. We applied this simple bound to turbo-like codes.
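    For orientation, the classical union bound is the simplest member of the family of bounds discussed above: like the paper's bound it needs only the weight distribution, though it is much looser near capacity. The sketch below evaluates it for BPSK over AWGN, using the (7,4) Hamming code's weight distribution as an example input.

    ```python
    import math

    def q_function(x):
        """Gaussian tail probability Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def union_bound_word_error(weight_dist, rate, ebno_db):
        """Classical union bound on ML word-error probability for BPSK over
        AWGN: sum over distances d of A_d * Q(sqrt(2 * R * d * Eb/N0)),
        where weight_dist maps d -> multiplicity A_d (zero weight excluded)."""
        ebno = 10.0 ** (ebno_db / 10.0)
        return sum(a_d * q_function(math.sqrt(2.0 * rate * d * ebno))
                   for d, a_d in weight_dist.items())

    # Weight distribution of the (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1.
    hamming_weights = {3: 7, 4: 7, 7: 1}
    ```

    The union bound diverges at low SNR, which is precisely the regime where Gallager-style bounds such as the one in this work remain useful.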

  7. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  8. RAYS: a geometrical optics code for EBT

    SciTech Connect

    Batchelor, D.B.; Goldfinger, R.C.

    1982-04-01

    The theory, structure, and operation of the code are described. Mathematical details of equilibrium subroutines for slab, bumpy torus, and tokamak plasma geometries are presented. Wave dispersion and absorption subroutines are presented for frequencies ranging from the ion cyclotron frequency to the electron cyclotron frequency. Graphics postprocessors for RAYS output data are also described.

  9. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
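    The response-matrix unfolding idea can be sketched generically. This is a simple van Cittert-style iteration on a hypothetical two-bin response (full-energy peak plus Compton spill-down), not the CUGEL algorithm itself: the estimate is repeatedly corrected by the residual between the measurement and the folded estimate.

    ```python
    def apply_response(R, s):
        """Fold a source spectrum through the detector response matrix."""
        return [sum(R[i][j] * s[j] for j in range(len(s))) for i in range(len(R))]

    def unfold(R, measured, iterations=200):
        """Iteratively unfold a measured pulse-height distribution: start from
        the measurement and add the folding residual each pass (converges when
        the response matrix is close enough to the identity)."""
        s = list(measured)
        for _ in range(iterations):
            folded = apply_response(R, s)
            s = [sj + (mj - fj) for sj, mj, fj in zip(s, measured, folded)]
        return s

    # Hypothetical 2-bin response: 90% of high-energy counts stay in the peak,
    # 20% spill down into the low-energy bin; low-energy peak efficiency 80%.
    R = [[0.9, 0.0],
         [0.2, 0.8]]
    ```

    For a true source spectrum [10, 5] the measurement folds to [9.0, 6.0], and the iteration recovers the source; real unfolding codes add the drift, decay, background, and efficiency corrections the abstract lists.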

  10. A new integrated symmetrical table for genetic codes.

    PubMed

    Shu, Jian-Jun

    2017-01-01

    Degeneracy is a salient feature of genetic codes, because there are more codons than amino acids. The conventional table for genetic codes suffers from an inability to illustrate the symmetrical nature of genetic base codes. In fact, because conventional wisdom avoids the question, there is little agreement as to whether this symmetrical nature even exists. A better understanding of symmetry, and an appreciation for its essential role in the formation of the genetic code, can improve our understanding of nature's coding processes. Thus, it is worth formulating a new integrated symmetrical table for genetic codes, which is presented in this paper. It could be very useful for understanding the Nobel laureate Crick's wobble hypothesis - how one transfer ribonucleic acid can recognize two or more synonymous codons - which remains an unsolved fundamental question in biological science.
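    The degeneracy at stake can be made concrete by grouping codons on their first two bases, since within such a family only the third (wobble) position varies. The sketch below uses a small fragment of the standard codon table; the full table has 64 codons.

    ```python
    from collections import defaultdict

    # Fragment of the standard genetic code (RNA codons), for illustration.
    CODON_TABLE = {
        "UUU": "Phe", "UUC": "Phe", "UUA": "Leu", "UUG": "Leu",
        "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
        "CAU": "His", "CAC": "His", "CAA": "Gln", "CAG": "Gln",
    }

    def wobble_families(table):
        """Group codons by their first two bases; within each family only the
        third (wobble) position differs, which is where most degeneracy sits."""
        families = defaultdict(dict)
        for codon, amino_acid in table.items():
            families[codon[:2]][codon] = amino_acid
        return families
    ```

    A family like GGN mapping entirely to glycine is exactly the pattern Crick's wobble hypothesis addresses: one tRNA can serve several synonymous codons differing only in the third base.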

  11. CHMWTR: A Plasma Chemistry Code for Water Vapor

    DTIC Science & Technology

    2012-02-01

    This report discusses the plasma chemistry of electrical discharges in water vapor, including photo-ionization by a mJ-class ultra-short pulse laser, and describes a computer code designed to model such discharges. The code is called CHMWTR, in analogy with the NRL …

  12. PLASIM: A computer code for simulating charge exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.

    1982-01-01

    The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  13. CALMAR: A New Versatile Code Library for Adjustment from Measurements

    NASA Astrophysics Data System (ADS)

    Grégoire, G.; Fausser, C.; Destouches, C.; Thiollay, N.

    2016-02-01

    CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework with its linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress; these cases will be included in the code package that will be made available to the community. Future developments are also discussed: the code should support the new Generalized Nuclear Data (GND) format, which has many advantages compared with ENDF.

  14. RESRAD-CHEM: A computer code for chemical risk assessment

    SciTech Connect

    Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.

    1993-10-01

    RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency's (EPA's) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.

  15. A profile of coding staff in Sydney metropolitan public hospitals.

    PubMed

    McIntosh, Jean; Dimitropoulos, Vera; Bramley, Michelle

    2004-01-01

    This survey assessed the profiles of ICD-10-AM coding staff employed in 13 major, acute care public hospitals in Sydney, Australia, during a two-week period in 1999. Approximately 90% (56/61) of respondents gave their job title as Clinical Coder or Coding Clerk; of these, 20 (36%) were qualified Health Information Managers, of whom 10 coded for ≥90% of their work time and three for <75% of the time. One quarter of all Clinical Coders/Coding Clerks spent >25% of their work time performing duties other than coding. Five Health Information Management (HIM) Clinical Coders/Coding Clerks were paid under the Clerical, rather than the HIM, Award.

  16. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  17. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-01-01

    As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3, constraint length nu = 6 (with 2^nu = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage of being 45 deg rotationally invariant, which means the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
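The throughput figures quoted above follow directly from the bandwidth efficiencies: the rate-5/6 4D-8PSK code carries 5 information bits per two 8PSK symbols, i.e. 2.5 bit/sym. A quick arithmetic check (numbers taken from the abstract):

```python
# Worked check of the bandwidth-efficiency arithmetic quoted in the abstract.
sym_rate = 3e6                         # HST modulator symbol rate (sym/s)
K_bpsk = 1.0 / 3.0                     # rate-1/3 convolutional code on BPSK: 1/3 bit/sym
K_trellis = 5.0 / 2.0                  # rate-5/6 4D-8PSK: 5 info bits per two 8PSK symbols

hst_current = K_bpsk * sym_rate        # ~1 Mbit/s with the existing scheme
hst_trellis = K_trellis * sym_rate     # 7.5 Mbit/s with the trellis code
sonet_sym_rate = 155.52e6 / K_trellis  # ~62.2 Msym/s, fitting a 72 MHz transponder
```

The same 3 Msym/s modulator thus carries 7.5× the data rate, and SONET at 2.5 bit/sym needs about 62.2 Msym/s, consistent with a 72 MHz transponder.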

  18. CESAR: A Code for Nuclear Fuel and Waste Characterisation

    SciTech Connect

    Vidal, J.M.; Grouiller, J.P.; Launay, A.; Berthion, Y.; Marc, A.; Toubon, H.

    2006-07-01

    CESAR (Simplified Evolution Code Applied to Reprocessing) is a depletion code developed through a joint program between CEA and COGEMA. In the late 1980s, the first use of this code dealt with nuclear measurement at the laboratories of the La Hague reprocessing plant. The use of CESAR was then extended to the characterization of all entrance materials and, via tracers, of all produced waste. The code can distinguish more than 100 heavy nuclides, 200 fission products, and 100 activation products, and it can characterize both the fuel and the structural material of the fuel. CESAR can also perform depletion calculations for cooling times from 3 months to 1 million years. Between 2003 and 2005, the fifth version of the code was developed. The modifications were related to the harmonization of the code's nuclear data with the JEF2.2 nuclear data file. This paper describes the code and explains its extensive use at the La Hague reprocessing plant and in prospective studies. The second part focuses on the modifications of the latest version, and describes the application field and the qualification of the code. Many companies and the IAEA use CESAR today. CESAR offers a very user-friendly Graphical User Interface. (authors)

  19. A New Detailed Term Accounting Opacity Code: TOPAZ

    SciTech Connect

    Iglesias, C A; Chen, M H; Isaacs, W; Sonnad, V; Wilson, B G

    2004-04-28

    A new opacity code, TOPAZ, which explicitly includes configuration term structure in the bound-bound transitions is being developed. The goal is to extend the current capabilities of detailed term accounting opacity codes such as OPAL that are limited to lighter elements of astrophysical interest. At present, opacity calculations of heavier elements use statistical methods that rely on the presence of myriad spectral lines for accuracy. However, statistical approaches have been shown to be inadequate for astrophysical opacity calculations. An application of the TOPAZ code will be to study the limits of statistical methods. Comparisons of TOPAZ to other opacity codes as well as experiments are presented.

  20. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR REGULATORY COMMISSION DOMESTIC LICENSING OF PRODUCTION AND UTILIZATION FACILITIES Issuance, Limitations, and Conditions of Licenses and Construction Permits § 50.55a Codes and standards. Each construction permit for a utilization facility is...

  1. What's in a code? Towards a formal account of the relation of ontologies and coding systems.

    PubMed

    Rector, Alan L

    2007-01-01

    Terminologies are increasingly based on "ontologies" developed in description logics and related languages such as the new Web Ontology Language, OWL. The use of description logic has been expected to reduce ambiguity and make it easier to determine logical equivalence, deal with negation, and specify EHRs. However, this promise has not been fully realised: in part because early description logics were relatively inexpressive, and in part because the relation between coding systems, EHRs, and ontologies expressed in description logics has not been fully understood. This paper presents a unifying approach using the expressive formalisms available in the latest version of OWL, OWL 1.1.

  2. IGB grid: User's manual (A turbomachinery grid generation code)

    NASA Technical Reports Server (NTRS)

    Beach, T. A.; Hoffman, G.

    1992-01-01

    A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.

  3. The Development of a Discipline Code for Sue Bennett College.

    ERIC Educational Resources Information Center

    McLendon, Sandra F.

    A Student Discipline Code (SDC) was developed to govern student life at Sue Bennett College (SBC), Kentucky, a private two-year college affiliated with the Methodist Church. Steps taken in the process included the following: a review of relevant literature on student discipline; examination of discipline codes from six other educational…

  4. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  5. Code-Mixing as a Bilingual Instructional Strategy

    ERIC Educational Resources Information Center

    Jiang, Yih-Lin Belinda; García, Georgia Earnest; Willis, Arlette Ingram

    2014-01-01

    This study investigated code-mixing practices, specifically the use of L2 (English) in an L1 (Chinese) class in a U.S. bilingual program. Our findings indicate that the code-mixing practices made and prompted by the teacher served five pedagogical functions: (a) to enhance students' bilingualism and bilingual learning, (b) to review and…

  6. A new code for collisional drift kinetic equation solving

    SciTech Connect

    Reynolds, J. M.; Velasco, J. L.; Tarancon, A.; Lopez-Bruna, D.; Guasp, J.

    2008-11-02

    We introduce a new plasma transport code based on evolving the Boltzmann equation in the guiding-center approximation, with collisions taken into account. The spatial geometry is discretized using high-order elements in space and a moment expansion in velocity space. First calculations with a non-evolving electric field agree with the particle code ISDEP.

  7. Rationale for Student Dress Codes: A Review of School Handbooks

    ERIC Educational Resources Information Center

    Freeburg, Elizabeth W.; Workman, Jane E.; Lentz-Hees, Elizabeth S.

    2004-01-01

    Through dress codes, schools establish rules governing student appearance. This study examined stated rationales for dress and appearance codes in secondary school handbooks; 182 handbooks were received. Of 150 handbooks containing a rationale, 117 related dress and appearance regulations to students' right to a non-disruptive educational…

  8. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.

  9. A rate-transparent, self-clocking line code

    NASA Technical Reports Server (NTRS)

    Prucnal, Paul R.; Perrier, Philippe A.

    1987-01-01

    A reliable and economical new transmission code is presented with the following properties: zero dc content, baseband bandwidth conservation, self-clocking capability, and data-rate-transparent decoding and synchronization. Simple encoder/decoder and clock extractor circuits are given. The code is demonstrated in a wavelength-multiplexed fiber-optic communication system.

  10. Mock Code: A Code Blue Scenario Requested by and Developed for Registered Nurses

    PubMed Central

    Rideout, Janice; Pritchett-Kelly, Sherry; McDonald, Melissa; Mullins-Richards, Paula; Dubrowski, Adam

    2016-01-01

    The use of simulation in medical training is quickly becoming more common, with applications in emergency, surgical, and nursing education. Recently, registered nurses working in surgical inpatient units requested a mock code simulation to practice skills, improve knowledge, and build self-confidence in a safe and controlled environment. A simulation scenario using a high-fidelity mannequin was developed and will be discussed herein. PMID:28123919

  11. Student Attitudes toward a Medical School Honor Code.

    ERIC Educational Resources Information Center

    Brooks, C. Michael; And Others

    1981-01-01

    A survey to determine medical student perceptions of an honor code and the attitudes of medical students toward personal adherence to the provisions of an honor code at the University of Alabama School of Medicine is presented. Support was compromised by the reluctance of students to report suspected violations. (MLW)

  12. Coding as a Trojan Horse for Mathematics Education Reform

    ERIC Educational Resources Information Center

    Gadanidis, George

    2015-01-01

    The history of mathematics educational reform is replete with innovations taken up enthusiastically by early adopters without significant transfer to other classrooms. This paper explores the coupling of coding and mathematics education to create the possibility that coding may serve as a Trojan Horse for mathematics education reform. That is,…

  13. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  14. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC). In this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is a piecewise image, filled with sharp contours and smooth interiors, and contours are more important than interiors in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by different depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into contour and interior, and two different procedures are employed to code them respectively. A vector-based strategy is applied to code the contour lines: straight-line segments cost few bits since they are coded as vectors, while pixels outside straight-line segments are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, contours are coded losslessly whereas interiors are coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.

  15. A novel unified coding analytical method for Internet of Things

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Zhang, JianHong

    2013-08-01

    This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out "displacement goods" and "physical objects" and explains the relationship between them. It details the item coding principles, establishes a one-to-one relationship between three-dimensional spatial coordinates of points and global manufacturers, and can be extended indefinitely. It solves the problem of unified coding in the production and circulation phases with a novel unified coding method, and further explains how to update the item information corresponding to the coding in the sale and use stages, so as to meet the requirement that the Internet of Things carry out real-time monitoring and intelligent management of each item.

  16. Coded excitation for diverging wave cardiac imaging: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhao, Feifei; Tong, Ling; He, Qiong; Luo, Jianwen

    2017-02-01

    Diverging wave (DW) based cardiac imaging has gained increasing interest in recent years given its capacity to achieve ultrahigh frame rates. However, the signal-to-noise ratio (SNR), contrast, and penetration depth of the resulting B-mode images are typically low, as DWs spread energy over a large region. Coded excitation is known to be capable of increasing the SNR and penetration of ultrasound imaging. The aim of this study was therefore to test the feasibility of applying coded excitation in DW imaging to improve the corresponding SNR, contrast, and penetration depth. To this end, two types of codes, i.e. a linear frequency modulated chirp code and a set of complementary Golay codes, were tested in three different DW imaging schemes: a single-angle DW transmit without compounding, and three- and five-angle DW transmits with coherent compounding. The performances (SNR, contrast ratio (CR), contrast-to-noise ratio (CNR), and penetration) of the different imaging schemes were investigated by means of simulations and in vitro experiments. As benchmarks, corresponding DW imaging schemes with regular pulsed excitation as well as the conventional focused imaging scheme were also included. The results showed that the SNR was improved by about 10 dB using coded excitation, while the penetration depth was increased by 2.5 cm and 1.8 cm using the chirp code and Golay codes, respectively. The CNR and CR gains varied with depth for the different DW schemes using coded excitation. Specifically, for non-compounded DW imaging schemes, the gain in CR was about 5 dB and 3 dB while the gain in CNR was about 4.5 dB and 3.5 dB at larger depths using the chirp code and Golay codes, respectively. For compounded imaging schemes using coded excitation, the gains in penetration and contrast were relatively smaller compared to non-compounded ones. Overall, these findings indicated the feasibility of coded excitation in improving the image quality of DW imaging. Preliminary in vivo cardiac images
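The SNR benefit of a chirp code comes from pulse compression: a long coded transmit carries more energy, and a matched filter on receive restores axial resolution. A minimal numpy sketch of the principle (all parameters invented, not those of the study):

```python
import numpy as np

# Pulse compression with a linear FM chirp: correlate the noisy echo with
# the known transmit code (matched filter) to recover the echo position.
fs = 50e6                               # sampling rate (Hz), invented
T = 10e-6                               # chirp duration (s)
f0, f1 = 2e6, 8e6                       # swept frequency band (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

# Synthetic echo: the chirp buried in noise at sample 1000
rng = np.random.default_rng(0)
echo = np.zeros(4096)
echo[1000:1000 + len(chirp)] += chirp
echo += 0.5 * rng.standard_normal(len(echo))

compressed = np.correlate(echo, chirp, mode="valid")  # matched filter
peak = int(np.argmax(np.abs(compressed)))             # lag of best alignment
```

The compressed output peaks sharply at the echo's true lag even though the raw echo is barely visible above the noise, which is exactly the SNR gain the abstract exploits.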

  17. Electric utility value determination for wind energy. Volume II. A user's guide. [WTP code; WEIBUL code; ROSEN code; ULMOD code; FINAM code

    SciTech Connect

    Percival, David; Harper, James

    1981-02-01

    This report describes a method for determining the value of wind energy systems to electric utilities. It is performed by a package of computer models available from SERI that can be used with most utility planning models. The final output of these models gives a financial value ($/kW) of the wind energy system under consideration in the specific utility system. This volume, the second of two volumes, is a user's guide for the computer programs available from SERI. The first volume describes the value determination methodology and gives detailed discussion on each step of the computer modeling.

  18. Electric utility value determination for wind energy. Volume I. A methodology. [WTP code; WEIBUL code; ROSEN code; ULMOD code; FINAM code

    SciTech Connect

    Percival, David; Harper, James

    1981-02-01

    This report describes a method electric utilities can use to determine the value of wind energy systems. It is performed by a package of computer models available from SERI that can be used with most utility planning models. The final output of these models gives a financial value ($/kW) of the wind energy system under consideration in the specific utility system. This report, the first of two volumes, describes the value determination method and gives detailed discussion on each computer program available from SERI. The second volume is a user's guide for these computer programs.

  19. A New AMR Code for Relativistic Magnetohydrodynamics in Dynamical Spacetimes: Numerical Method and Code Validation

    NASA Astrophysics Data System (ADS)

    Liu, Yuk Tung; Etienne, Zachariah; Shapiro, Stuart

    2011-04-01

    The Illinois relativity group has written and tested a new GRMHD code, which is compatible with adaptive-mesh refinement (AMR) provided by the widely-used Cactus/Carpet infrastructure. Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the BSSN formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential, and the divergenceless constraint div(B) = 0 is enforced by computing B as the curl of the vector potential. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We present the numerical method and code validation tests for both Minkowski and curved spacetimes. The tests include magnetized shocks, nonlinear Alfven waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.
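The advantage of evolving the vector potential is that B = curl(A) is divergence-free by construction whenever the divergence and curl stencils commute. A small numpy sketch of this identity (illustrative only; a production code would use the staggered constrained-transport stencils mentioned above):

```python
import numpy as np

# Check div(curl A) = 0 for centered differences on a periodic grid.
# The field A is random noise; the identity holds for any A because the
# commuting difference operators cancel term by term.
rng = np.random.default_rng(1)
n, h = 32, 1.0 / 32
A = rng.standard_normal((3, n, n, n))   # vector potential components (Ax, Ay, Az)

def d(f, axis):
    # Centered difference with periodic wrap-around
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

Bx = d(A[2], 1) - d(A[1], 2)   # dAz/dy - dAy/dz
By = d(A[0], 2) - d(A[2], 0)   # dAx/dz - dAz/dx
Bz = d(A[1], 0) - d(A[0], 1)   # dAy/dx - dAx/dy
divB = d(Bx, 0) + d(By, 1) + d(Bz, 2)  # zero up to floating-point rounding
```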

  20. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  1. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1983-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modeling of the vortex development near the suction surface.

  2. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, which is a mathematical issue targeted at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
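Solution verification by Richardson extrapolation estimates the observed order of accuracy from results on successively refined grids. A self-contained sketch of the standard procedure, using a toy second-order "simulation" in place of GBS (the extrapolation formulas are standard; the stand-in problem is invented for illustration):

```python
import math

def simulate(h):
    # Stand-in "simulation": centered differencing of sin at x = 1,
    # a genuinely second-order accurate approximation of cos(1).
    x = 1.0
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

r = 2.0                                 # grid refinement ratio
h = 0.1
f1, f2, f3 = simulate(h), simulate(h / r), simulate(h / r**2)

# Observed order of accuracy from three grid levels
p = math.log(abs(f1 - f2) / abs(f2 - f3)) / math.log(r)   # close to 2.0
# Richardson-extrapolated estimate of the grid-converged solution
f_star = f3 + (f3 - f2) / (r**p - 1)
```

Matching the observed order p against the scheme's formal order is the verification check; the extrapolated value f_star also quantifies the remaining discretization error on the finest grid.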

  3. A highly specific coding system for structural chromosomal alterations.

    PubMed

    Martínez-Frías, M L; Martínez-Fernández, M L

    2013-04-01

    The Spanish Collaborative Study of Congenital Malformations (ECEMC, from its name in Spanish) has developed a very simple and highly specific coding system for structural chromosomal alterations. Such a coding system is of particular value at present, given the dramatic increase in the diagnosis of submicroscopic chromosomal deletions and duplications through molecular techniques. In summary, our new coding system allows the characterization of: (a) the type of structural anomaly; (b) the chromosome affected; (c) whether the alteration affects the short and/or the long arm; and (d) whether it is a non-pure dicentric, a non-pure isochromosome, or whether it affects several chromosomes. We show the distribution of 276 newborn patients with these types of chromosomal alterations using their corresponding codes according to our system. We consider that our approach may be useful not only for other registries, but also for laboratories performing these studies to store the results of their case series. Therefore, the aim of this article is to describe this coding system and to offer the opportunity for this coding to be applied by others. Moreover, as this is a SYSTEM, rather than a fixed code, it can be implemented with the necessary modifications to include the specific objectives of each program.
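The four attributes described above (anomaly type, chromosome, arm involvement, special qualifier) map naturally onto a delimited field code. The sketch below is purely hypothetical: the field order, delimiter, and values are invented for illustration and are not the ECEMC's actual codes.

```python
# Hypothetical delimited code mirroring the four attributes listed above:
# (a) anomaly type, (b) chromosome, (c) arm(s) affected, (d) qualifier
# (e.g. non-pure dicentric). All field values here are invented.
FIELDS = ("type", "chromosome", "arms", "qualifier")

def encode(type_, chromosome, arms, qualifier="none"):
    return "-".join((type_, chromosome, arms, qualifier))

def decode(code):
    return dict(zip(FIELDS, code.split("-")))

code = encode("del", "5", "p")   # e.g. a deletion on 5p -> "del-5-p-none"
record = decode(code)
```

Because every attribute occupies its own field, such codes can be filtered or tabulated per attribute, which is what makes a systematic (rather than fixed) code extensible.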

  4. The Numerical Electromagnetics Code (NEC) - A Brief History

    SciTech Connect

    Burke, G J; Miller, E K; Poggio, A J

    2004-01-20

    The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for numerous copies (perhaps 1000s) that were acquired through other means capitalizing on the open source code, the absence of distribution controls prior to NEC3 and the availability of versions on the Internet. In this paper we briefly review the history of the code that is concisely displayed in Figure 1. We will show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately led to NEC and how it evolved to the present day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code both of which can be found in the code documents.

  5. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)

    1990-01-01

    In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  6. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  7. Depth perception with a rotationally symmetric coded camera

    NASA Astrophysics Data System (ADS)

    Chang, Chuan-Chung; Chen, Yung-Lin; Chang, Chir-Weei; Lee, Cheng-Chung

    2009-08-01

    A novel design of a phase-coded depth-sensing camera is presented. A rotationally symmetric phase mask is designed to discriminate the point spread functions (PSFs) from different scene distances. The depth information can then be computationally obtained from a single photograph captured through a phase-coded lens. The PSF must be carefully optimized at off-axis angles in order to produce a restored image that is sharp over the required field of view. In this paper, a phase-coded depth camera with a focal length of 10.82 mm, a sensor size of 2 mm, and an F-number of 5 is designed. Simulation data are exchanged between Matlab and Zemax for co-optimization of the optical coding and digital decoding processes. The simulation results show that coarse depth information can be recovered for object distances from 513 mm to 1000 mm.

  8. EMdeCODE: a novel algorithm capable of reading words of epigenetic code to predict enhancers and retroviral integration sites and to identify H3R2me1 as a distinctive mark of coding versus non-coding genes

    PubMed Central

    Santoni, Federico Andrea

    2013-01-01

    Existence of some extra-genetic (epigenetic) codes has been postulated since the discovery of the primary genetic code. Evident effects of histone post-translational modifications or DNA methylation over the efficiency and the regulation of DNA processes support this postulation. EMdeCODE is an original algorithm that approximates the genomic distribution of given DNA features (e.g. promoter, enhancer, viral integration) by identifying relevant ChIP-Seq profiles of post-translational histone marks or DNA binding proteins and combining them in a supermark. The EMdeCODE kernel is essentially a two-step procedure: (i) an expectation-maximization process calculates the mixture of epigenetic factors that maximizes the Sensitivity (recall) of the association with the feature under study; (ii) the approximated density is then recursively trimmed with respect to a control dataset to increase the precision by reducing the number of false positives. EMdeCODE densities significantly improve the prediction of enhancer loci and retroviral integration sites with respect to previous methods. Importantly, it can also be used to extract distinctive factors between two arbitrary conditions. Indeed, EMdeCODE identifies unexpected epigenetic profiles specific for coding versus non-coding RNA, pointing towards a new role for H3R2me1 in coding regions. PMID:23234700

  9. Revisiting the Physico-Chemical Hypothesis of Code Origin: An Analysis Based on Code-Sequence Coevolution in a Finite Population

    NASA Astrophysics Data System (ADS)

    Bandhu, Ashutosh Vishwa; Aggarwal, Neha; Sengupta, Supratim

    2013-12-01

    The origin of the genetic code marked a major transition from a plausible RNA world to the world of DNA and proteins and is an important milestone in our understanding of the origin of life. We examine the efficacy of the physico-chemical hypothesis of code origin by carrying out simulations of code-sequence coevolution in finite populations in stages, leading first to the emergence of ten amino acid code(s) and subsequently to 14 amino acid code(s). We explore two different scenarios of primordial code evolution. In one scenario, competition occurs between populations of equilibrated code-sequence sets, while in the other, new codes compete with existing codes as they are gradually introduced into the population with a finite probability. In either case, we find that natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. The code whose structure is most consistent with the standard genetic code is often not among the codes that have a high fixation probability. However, we find that the composition of the code population affects the code fixation probability. A physico-chemically optimized code gets fixed with a significantly higher probability if it competes against a set of randomly generated codes. Our results suggest that physico-chemical optimization may not be the sole driving force in ensuring the emergence of the standard genetic code.
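The competition between codes in a finite population can be caricatured with a standard Moran process. The sketch below is illustrative only, not the authors' simulation: `fitness_mutant` and `fitness_wild` stand in for the degree of physico-chemical optimization of a competing code, and in the neutral case the fixation probability of a single mutant recovers the textbook value of 1/N.

```python
import random

def moran_fixation(pop_size, n_mutants, fitness_mutant, fitness_wild,
                   trials, seed=1):
    """Fraction of trials in which the mutant 'code' fixes in a Moran process."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k = n_mutants
        while 0 < k < pop_size:
            # reproducer chosen weighted by fitness; a random individual dies
            total = k * fitness_mutant + (pop_size - k) * fitness_wild
            birth_is_mutant = rng.random() < k * fitness_mutant / total
            death_is_mutant = rng.random() < k / pop_size
            k += (1 if birth_is_mutant else 0) - (1 if death_is_mutant else 0)
        fixed += (k == pop_size)
    return fixed / trials

# Neutral case: fixation probability of a single mutant should be near 1/N.
p = moran_fixation(pop_size=10, n_mutants=1,
                   fitness_mutant=1.0, fitness_wild=1.0, trials=4000)
```

Raising `fitness_mutant` above 1.0 models a better-optimized code; the abstract's point is that even a fitness advantage of this kind does not guarantee that the most optimized code is the one that fixes.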

  10. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  11. ORMEC: a three-dimensional MHD spectral inverse equilibrium code

    SciTech Connect

    Hirshman, S.P.; Hogan, J.T.

    1986-02-01

    The Oak Ridge Moments Equilibrium Code (ORMEC) is an efficient computer code that has been developed to calculate three-dimensional MHD equilibria using the inverse spectral method. The fixed boundary formulation, which is based on a variational principle for the spectral coefficients (moments) of the cylindrical coordinates R and Z, is described and compared with the finite difference code BETA developed by Bauer, Betancourt, and Garabedian. Calculations for the Heliotron, Wendelstein VIIA, and Advanced Toroidal Facility (ATF) configurations are performed to establish the accuracy and mesh convergence properties for the spectral method. 16 refs., 13 figs.

  12. Progress towards a world-wide code of conduct

    SciTech Connect

    Lee, J.A.N.; Berleur, J.

    1994-12-31

    In this paper the work of the International Federation for Information Processing (IFIP) Task Group on Ethics is described and the recommendations presented to the General Assembly are reviewed. While a common code of ethics or conduct has not been recommended for consideration by the member societies of IFIP, a set of guidelines for the establishment and evaluation of codes has been produced, and procedures for the assistance of code development have been established within IFIP. This paper proposes that the data collected by the Task Group and the proposed guidelines can be used as a tool for the study of codes of practice, providing a teachable, learnable educational module in courses related to the ethics of computing and computation, and looks at the next steps in bringing ethical awareness to the IT community.

  13. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microseconds. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
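The first two stages of the pipeline described above (Galois-field arithmetic, then syndrome calculation) can be sketched in software for a much smaller code. The fragment below is an illustrative toy, not the (31,15) code: it builds exp/log tables for GF(16) with the primitive polynomial x^4 + x + 1 and computes syndromes for a code with generator g(x) = (x - alpha)(x - alpha^2).

```python
# Sketch of a decoder's Galois-field arithmetic and syndrome step for a
# small RS-style code over GF(16). Parameters are illustrative.

# Build exp/log tables for GF(2^4), primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011          # reduce modulo x^4 + x + 1
for i in range(15, 30):
    EXP[i] = EXP[i - 15]      # wrap so LOG[a] + LOG[b] indexes directly

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):           # polynomials low-order coefficient first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def poly_eval(p, x):
    y = 0
    for c in reversed(p):     # Horner's rule
        y = gf_mul(y, x) ^ c
    return y

# g(x) = (x - alpha)(x - alpha^2); in characteristic 2, minus is plus.
g = poly_mul([EXP[1], 1], [EXP[2], 1])
msg = [3, 0, 7, 11, 1]                 # arbitrary message symbols
code = poly_mul(msg, g)                # non-systematic encoding for brevity

syndromes = [poly_eval(code, EXP[i]) for i in (1, 2)]   # zero: codeword valid
code[2] ^= 5                                            # inject one error
syndromes_err = [poly_eval(code, EXP[i]) for i in (1, 2)]
```

Because the codeword is a multiple of g(x), it vanishes at the roots alpha and alpha^2, so the syndromes are zero; after the injected error they become the inputs Berlekamp's algorithm and the Chien search would consume.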

  14. A code for calculating intrabeam scattering and beam lifetime

    SciTech Connect

    Kim, C.H.

    1997-05-01

    Beam emittances in a circular accelerator with a high beam intensity are strongly affected by small-angle intrabeam Coulomb scattering. In the computer simulation model presented here, the authors use three coupled nonlinear differential equations to describe the evolution of the emittances in the transverse and longitudinal planes. These equations include terms which take into account intrabeam scattering, adiabatic damping, microwave instabilities, synchrotron damping, and quantum excitations. A code was generated to solve the equations numerically and incorporated into a FORTRAN code library. Circular high-intensity physics routines are included in the library, such as intrabeam scattering, Touschek scattering, and the bunch-lengthening effect of higher harmonic cavities. The code presently runs in the PC environment. A description of the code and some examples are presented.
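The numerical task described, integrating coupled emittance evolution equations with competing growth and damping terms, can be caricatured in one plane. The sketch below is an assumption, not the paper's equations: a single emittance with an intrabeam-scattering-like growth term (weaker at large emittance) and a linear damping term, integrated with classical RK4 until it relaxes to the analytic equilibrium.

```python
# One-plane caricature of emittance evolution: d(eps)/dt = G/eps - D*eps,
# integrated with a fourth-order Runge-Kutta step. The rate constants and
# the single-plane form are illustrative; the real code couples three planes.
def rk4_step(f, y, dt):
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

G, D = 2.0, 0.5                      # hypothetical growth and damping rates
f = lambda eps: G / eps - D * eps    # d(emittance)/dt

eps = 0.1                            # small injected emittance
for _ in range(20000):
    eps = rk4_step(f, eps, dt=0.001)

equilibrium = (G / D) ** 0.5         # analytic fixed point of the model
```

The fixed point where growth balances damping, eps = sqrt(G/D), plays the role of the equilibrium emittance; the full three-plane system is solved the same way, with the planes coupled through the scattering terms.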

  15. POPCORN: A comparison of binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result, we find that when the assumptions are equalized, the results are similar. The main differences arise from deviating physical input.

  16. The genetic code--more than just a table.

    PubMed

    Berleant, D; White, M; Pierce, E; Tudoreanu, E; Boeszoermenyi, A; Shtridelman, Y; Macosko, J C

    2009-01-01

    The standard codon table is a primary tool for basic understanding of molecular biology. In the minds of many, the table's orderly arrangement of bases and amino acids is synonymous with the true genetic code, i.e., the biological coding principle itself. However, developments in the field reveal a much more complex and interesting picture. In this article, we review the traditional codon table and its limitations in light of the true complexity of the genetic code. We suggest the codon table be brought up to date and, as a step, we present a novel superposition of the BLOSUM62 matrix and an allowed point mutation matrix. This superposition depicts an important aspect of the true genetic code: its ability to tolerate mutations and mistranslations.

  17. GPS receiver CODE bias estimation: A comparison of two methods

    NASA Astrophysics Data System (ADS)

    McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.

    2017-04-01

    The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method on both daily and monthly time intervals is presented. Calibrated TEC obtained using CODE-derived DCBs is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs results in an accuracy of 1.48 TECU.
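The general idea behind minimizing standard deviations for bias estimation can be shown on synthetic data. The sketch below is a heavily simplified assumption, not the published MSD algorithm: scan candidate receiver biases, map bias-corrected slant TEC to vertical with per-satellite mapping factors, and keep the bias that minimizes the spread of calibrated vertical TEC across simultaneous observations.

```python
# Toy bias estimation by spread minimization. Synthetic geometry, mapping
# factors, and the scan procedure are all illustrative assumptions.
import statistics

TRUE_BIAS = 3.2                      # TECU, synthetic receiver bias
VTEC = [12.0, 15.1, 9.7]             # true vertical TEC at three epochs
MAP = [0.5, 0.7, 0.9]                # slant-to-vertical factors per satellite

# slant TEC seen by each satellite: vertical TEC mapped to slant, plus bias
slant = [[v / m + TRUE_BIAS for m in MAP] for v in VTEC]

def spread(bias):
    """Mean per-epoch spread of calibrated vertical TEC for a candidate bias."""
    return statistics.mean(
        statistics.pstdev((s - bias) * m for s, m in zip(ep, MAP))
        for ep in slant)

candidates = [b / 10 for b in range(0, 101)]   # scan 0.0 .. 10.0 TECU
best = min(candidates, key=spread)
```

A wrong bias leaves a residual (TRUE_BIAS - bias) * M that differs per satellite, inflating the per-epoch spread; only the true bias collapses all satellites onto the same vertical TEC, which is the intuition the MSD method exploits.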

  18. Adaptive EZW coding using a rate-distortion criterion

    NASA Astrophysics Data System (ADS)

    Yin, Che-Yi

    2001-07-01

    This work presents a new method that improves on the EZW image coding algorithm. The standard EZW image coder uses a uniform quantizer with a threshold (deadzone) that is identical in all subbands. The quantization step sizes are not optimized under the rate-distortion sense. We modify the EZW by applying the Lagrange multiplier to search for the best step size for each subband and allocate the bit rate for each subband accordingly. Then we implement the adaptive EZW codec to code the wavelet coefficients. Two coding environments, independent and dependent, are considered for the optimization process. The proposed image coder retains all the good features of the EZW, namely, embedded coding, progressive transmission, order of the important bits, and enhances it through the rate-distortion optimization with respect to the step sizes.
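The Lagrangian step-size search described above can be sketched for a single subband. The sketch below is illustrative, not the paper's codec: candidate steps, coefficients, and lambda values are made up, the quantizer is a simple truncation-based deadzone quantizer, and the rate is an empirical first-order entropy estimate of the quantizer indices.

```python
# Rate-distortion selection of a quantizer step size: minimize the
# Lagrangian J = D + lambda * R over candidate steps for one subband.
import math
from collections import Counter

coeffs = [0.1, -0.4, 2.7, -3.1, 0.05, 1.9, -0.2, 4.2, -1.1, 0.3]

def rd_cost(step, lam):
    idx = [int(c / step) for c in coeffs]   # truncation gives the deadzone
    recon = [i * step for i in idx]
    dist = sum((c - r) ** 2 for c, r in zip(coeffs, recon))
    n = len(idx)
    probs = [cnt / n for cnt in Counter(idx).values()]
    rate = -sum(p * math.log2(p) for p in probs) * n   # entropy estimate, bits
    return dist + lam * rate

steps = [0.25, 0.5, 1.0, 2.0]
best_small_lam = min(steps, key=lambda s: rd_cost(s, lam=0.01))   # fine step
best_large_lam = min(steps, key=lambda s: rd_cost(s, lam=50.0))   # coarse step
```

A small lambda weights distortion and picks the finest step; a large lambda weights rate and picks the coarsest. Sweeping lambda traces the operational rate-distortion curve, which is how the per-subband bit allocation in the abstract is obtained.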

  19. Teaching billing and coding to medical students: a pilot study.

    PubMed

    Tran, Jiaxin; Cennimo, David; Chen, Sophia; Altschuler, Eric L

    2013-08-12

    Complex billing practices cost the US healthcare system billions of dollars annually. Coding for outpatient office visits [known as Evaluation & Management (E&M) services] is particularly fraught with errors. The best way to ensure proper billing and coding by practicing physicians is to teach it as part of the medical school curriculum. Here, in a pilot study, we show that medical students can learn the basic principles well from lectures. This approach is easy to implement into a medical school curriculum.

  1. A finite element code for electric motor design

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  2. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
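The patch-based load balancing with a space-filling curve can be sketched in a few lines. The sketch below is an assumption-level illustration, not PSC's implementation: it uses a Morton (Z-order) curve rather than the more locality-preserving alternatives, and per-patch particle counts stand in for the real cost model.

```python
# Sketch of space-filling-curve load balancing: order 2-D patches along a
# Morton (Z-order) curve, then cut the curve into contiguous chunks of
# roughly equal work. Patch grid and loads are illustrative.
def morton_key(ix, iy, bits=8):
    """Interleave the bits of the patch indices (Z-order)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

def partition(patches, loads, nprocs):
    """Greedy cut of the Morton-ordered patch list into contiguous chunks."""
    order = sorted(range(len(patches)), key=lambda i: morton_key(*patches[i]))
    target = sum(loads) / nprocs
    chunks, cur, acc = [], [], 0.0
    for i in order:
        cur.append(patches[i])
        acc += loads[i]
        if acc >= target and len(chunks) < nprocs - 1:
            chunks.append(cur)
            cur, acc = [], 0.0
    chunks.append(cur)
    return chunks

# 4x4 patch grid with nonuniform particle counts (the "load")
patches = [(ix, iy) for ix in range(4) for iy in range(4)]
loads = [1.0 + (ix == iy) * 3.0 for ix, iy in patches]   # heavier diagonal
chunks = partition(patches, loads, nprocs=4)
```

Because the curve preserves spatial locality, each contiguous chunk of patches is geometrically compact, so rebalancing moves whole patches while keeping communication with neighbors short.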

  3. A low memory zerotree coding for arbitrarily shaped objects.

    PubMed

    Su, Chorng-Yann; Wu, Bing-Fei

    2003-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a computationally simple and efficient zerotree coding technique for image compression. However, its high working memory requirement is its main drawback for hardware realization. We present a low memory zerotree coder (LMZC), which requires much less working memory than SPIHT. The LMZC coding algorithm abandons the use of lists, defines a different tree structure, and merges the sorting pass and the refinement pass together. The main techniques of LMZC are recursive programming and a top-bit scheme (TBS). In TBS, the top bits of transformed coefficients are used to store the coding status of coefficients instead of the lists used in SPIHT. In order to achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. A compact emplacement of the transformed coefficients is also proposed to further reduce working memory. The LMZC carefully treats "don't care" nodes in the wavelet tree and does not use bits to code such nodes. Comparison of LMZC with SPIHT shows that for coding a 768 × 512 color image, LMZC saves at least 5.3 MBytes of memory but only slightly increases execution time and slightly reduces peak signal-to-noise ratio (PSNR) values, thereby making it highly promising for some memory-limited applications.

  4. The Nuremberg Code and the Nuremberg Trial. A reappraisal.

    PubMed

    Katz, J

    1996-11-27

    The Nuremberg Code includes 10 principles to guide physician-investigators in experiments involving human subjects. These principles, particularly the first principle on "voluntary consent," primarily were based on legal concepts because medical codes of ethics existent at the time of the Nazi atrocities did not address consent and other safeguards for human subjects. The US judges who presided over the proceedings did not intend the Code to apply only to the case before them, to be a response to the atrocities committed by the Nazi physicians, or to be inapplicable to research as it is customarily carried on in medical institutions. Instead, a careful reading of the judgment suggests that they wrote the Code for the practice of human experimentation whenever it is being conducted.

  5. A comprehensive catalogue of the coding and non-coding transcripts of the human inner ear.

    PubMed

    Schrauwen, Isabelle; Hasin-Brumshtein, Yehudit; Corneveaux, Jason J; Ohmen, Jeffrey; White, Cory; Allen, April N; Lusis, Aldons J; Van Camp, Guy; Huentelman, Matthew J; Friedman, Rick A

    2016-03-01

    The mammalian inner ear consists of the cochlea and the vestibular labyrinth (utricle, saccule, and semicircular canals), which participate in both hearing and balance. Proper development and life-long function of these structures involves a highly complex coordinated system of spatial and temporal gene expression. The characterization of the inner ear transcriptome is likely important for the functional study of auditory and vestibular components, yet, primarily due to tissue unavailability, detailed expression catalogues of the human inner ear remain largely incomplete. We report here, for the first time, comprehensive transcriptome characterization of the adult human cochlea, ampulla, saccule and utricle of the vestibule obtained from patients without hearing abnormalities. Using RNA-Seq, we measured the expression of >50,000 predicted genes corresponding to approximately 200,000 transcripts, in the adult inner ear and compared it to 32 other human tissues. First, we identified genes preferentially expressed in the inner ear, and unique either to the vestibule or cochlea. Next, we examined expression levels of specific groups of potentially interesting RNAs, such as genes implicated in hearing loss, long non-coding RNAs, pseudogenes and transcripts subject to nonsense mediated decay (NMD). We uncover the spatial specificity of expression of these RNAs in the hearing/balance system, and reveal evidence of tissue specific NMD. Lastly, we investigated the non-syndromic deafness loci to which no gene has been mapped, and narrow the list of potential candidates for each locus. These data represent the first high-resolution transcriptome catalogue of the adult human inner ear. A comprehensive identification of coding and non-coding RNAs in the inner ear will enable pathways of auditory and vestibular function to be further defined in the study of hearing and balance. 
Expression data are freely accessible at https://www.tgen.org/home/research/research-divisions/neurogenomics/supplementary-data/inner-ear-transcriptome.aspx.

  6. A parallel and modular deformable cell Car-Parrinello code

    NASA Astrophysics Data System (ADS)

    Cavazzoni, Carlo; Chiarotti, Guido L.

    1999-12-01

    We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90, and makes use of programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependencies among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up over a wide range of numbers of processors, independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing, for a fixed problem, with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ˜50 ps.

  7. Understanding the NMC code of conduct: a student perspective.

    PubMed

    Sutcliffe, Hannah

    The Code, published by the Nursing and Midwifery Council (NMC) (2008), provides standards of performance and ethics for nurses and midwives, and is a means of safeguarding the health and wellbeing of the public. Guidance from the NMC may appear relatively straightforward; however, it can be difficult to implement in practice. This article identifies specific challenges that nurses may be presented with when adhering to The Code, as well as more general issues in interpreting the standards.

  8. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  9. A Comprehensive Validation Approach Using The RAVEN Code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J; Rinaldi, Ivan; Giannetti, Fabio; Caruso, Gianfranco

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. The state-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these gaps. In the following sections, the employed methodology, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
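The two ingredients named above, input-space sampling and multiple comparison metrics, can be sketched generically. Everything in the sketch below is a placeholder assumption: the mock response function, the synthetic "measurements", and the two metrics (RMSE of the nominal run, plus the fraction of experimental points covered by the sampled response band) stand in for RELAP-7, real data, and RAVEN's actual metric suite.

```python
# Generic sampling-based validation sketch: sample an uncertain input,
# run a mock system-code response, and score agreement with experimental
# data using more than one metric.
import random

def model(power, t):                 # mock system-code response (placeholder)
    return 300.0 + power * t / (1.0 + 0.1 * t)

times = [1.0, 2.0, 4.0, 8.0]
experiment = [304.7, 308.8, 313.8, 318.9]     # synthetic "measurements"

rng = random.Random(0)
samples = [rng.uniform(4.5, 5.5) for _ in range(200)]   # uncertain input
runs = [[model(p, t) for t in times] for p in samples]

# Metric 1: RMSE of the nominal (best-estimate) run against the data.
nominal = [model(5.0, t) for t in times]
rmse = (sum((m - e) ** 2 for m, e in zip(nominal, experiment))
        / len(times)) ** 0.5

# Metric 2: fraction of data points inside the sampled response band.
coverage = sum(
    min(r[i] for r in runs) <= experiment[i] <= max(r[i] for r in runs)
    for i in range(len(times))) / len(times)
```

A point-value metric like RMSE and a band metric like coverage can disagree (here the last measurement falls outside the sampled band), which is exactly why a validation methodology benefits from several metrics rather than one.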

  10. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  11. Estimation of ultrasonic attenuation in a bone using coded excitation.

    PubMed

    Nowicki, A; Litniewski, J; Secomski, W; Lewin, P A; Trots, I

    2003-11-01

    This paper describes a novel approach to estimate broadband ultrasound attenuation (BUA) in bone structure in humans in vivo using coded excitation. BUA is an accepted indicator for assessment of osteoporosis. In the tested approach a coded acoustic signal is emitted and then the received echoes are compressed into brief, high amplitude pulses making use of matched filters and correlation receivers. In this way the acoustic peak pressure amplitude probing the tissue can be markedly decreased whereas the average transmitted intensity increases proportionally to the length of the code. This paper examines the properties of three different transmission schemes, based on the Barker code, chirp and Golay code. The system designed is capable of generating a 16-bit complementary Golay code (CGC), a linear frequency modulated (LFM) chirp and a 13-bit Barker code (BC) at 0.5 and 1 MHz center frequencies. Both in vivo data acquired from healthy heel bones and in vitro data obtained from human calcaneus were examined, and the comparison between the results using coded excitation and a two-cycle sine burst is presented. It is shown that the CGC system allows the effective range of frequencies employed in the measurement of broadband acoustic energy attenuation in the trabecular bone to be doubled in comparison to the standard 0.5 MHz pulse transmission. The algorithm used to calculate the pairs of Golay sequences of different lengths, which provide the temporal side-lobe cancellation, is also presented. Current efforts are focused on adapting the system developed for operation in pulse-echo mode; this would allow examination and diagnosis of bones with limited access such as the hip bone.
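The side-lobe cancellation property of complementary Golay pairs mentioned above can be demonstrated with the standard recursive construction (which may differ from the authors' specific algorithm): starting from the trivial pair ([1], [1]), each doubling step concatenates (a, b) into (a+b, a-b), and the two aperiodic autocorrelations always sum to a delta.

```python
# Generate a complementary Golay pair by the standard doubling recursion
# and verify that the summed autocorrelations have zero sidelobes.
def golay_pair(n_doublings):
    a, b = [1], [1]
    for _ in range(n_doublings):
        a, b = a + b, a + [-x for x in b]   # concatenation, sign-flipped tail
    return a, b

def acorr(s, lag):
    """Aperiodic autocorrelation of sequence s at a given lag."""
    return sum(s[i] * s[i + lag] for i in range(len(s) - lag))

a, b = golay_pair(4)          # a 16-bit pair, the length used in the system
N = len(a)
sidelobes = [acorr(a, k) + acorr(b, k) for k in range(1, N)]
peak = acorr(a, 0) + acorr(b, 0)      # main lobe, equal to 2N
```

Each sequence alone has nonzero range sidelobes; transmitting both codes and summing the matched-filter outputs cancels them exactly, which is why the compressed pulse stays clean even for long codes.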

  12. FLASH: A finite element computer code for variably saturated flow

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  13. Developing a code of ethics for academics. Commentary on 'Ethics for all: differences across scientific society codes' (Bullock and Panicker).

    PubMed

    Fisher, Celia B

    2003-04-01

    This article discusses the possibilities and pitfalls of constructing a code of ethics for university professors. Professional, educational, legal, and policy questions regarding the goals, format, and content of an academic ethics code are raised and a series of aspirational principles and enforceable standards that might be included in such a document are presented for discussion and debate.

  14. Evolutionary analysis of DNA-protein-coding regions based on a genetic code cube metric.

    PubMed

    Sanchez, Robersy

    2014-01-01

    The right estimation of the evolutionary distance between DNA or protein sequences is the cornerstone of the current phylogenetic analysis based on distance methods. Herein, it is demonstrated that the Manhattan distance (dw), weighted by the evolutionary importance of the nucleotide bases in the codon, is a naturally derived metric in the standard genetic code cube inserted into the three-dimensional Euclidean space. Based on the application of distance dw, a novel evolutionary model is proposed. This model includes insertion/deletion mutations that are very important for cancer studies, but usually discarded in classical evolutionary models. In this study, the new evolutionary model was applied to the phylogenetic analysis of the DNA protein-coding regions of 13 mammal mitochondrial genomes and of four cancer genetic-susceptibility genes (ATM, BRCA1, BRCA2 and p53) from nine mammals. The opossum (a marsupial) was used as an out-group species for both sets of sequences. The new evolutionary model yielded the correct topology, while the current models failed to separate the evolutionarily distant species of mouse and opossum.
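The kind of metric described can be sketched directly. The sketch below is illustrative only: the integer coordinate assigned to each base and the positional weights (second codon position weighted most, third least, reflecting wobble) are assumptions, not the paper's derived values.

```python
# Weighted Manhattan distance between codons in a genetic-code "cube":
# each base maps to an integer coordinate and each codon position carries
# an assumed evolutionary-importance weight.
COORD = {"G": 0, "A": 1, "U": 2, "C": 3}     # assumed base ordering
WEIGHTS = (2.0, 4.0, 1.0)                    # assumed positional importance

def dw(codon1, codon2):
    """Weighted Manhattan distance between two codons."""
    return sum(
        w * abs(COORD[x] - COORD[y])
        for w, x, y in zip(WEIGHTS, codon1, codon2))

d_synonymous = dw("GCU", "GCC")   # third-position (wobble) change only
d_second = dw("GCU", "GGU")       # second-position change
```

With these weights a synonymous third-position substitution scores far lower than a second-position substitution, mirroring the biological intuition that the metric is meant to encode; a sequence-level distance is then a sum of codon distances.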

  15. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  16. HADES, A Code for Simulating a Variety of Radiographic Techniques

    SciTech Connect

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.
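Ray-traced transmission radiography of the kind HADES performs ultimately reduces, for each ray, to accumulating attenuation over the materials the ray crosses. A minimal sketch of that Beer-Lambert accumulation (a generic formulation, not HADES' actual implementation):

```python
import math

def transmission(segments):
    """Beer-Lambert transmission along one ray.

    `segments` is a list of (mu, length) pairs: the attenuation
    coefficient of each material crossed and the path length through it.
    The transmitted intensity is I0 times the returned factor.
    """
    return math.exp(-sum(mu * length for mu, length in segments))
```

A full simulator traces one such ray per detector pixel through the mesh or geometric objects to collect the segments.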

  17. Programming a real code in a functional language (part 1)

    SciTech Connect

    Hendrickson, C.P.

    1991-09-10

    For some, functional languages hold the promise of allowing ease of programming massively parallel computers that imperative languages such as Fortran and C do not offer. At LLNL, we have initiated a project to write the physics of a major production code in Sisal, a functional language developed at LLNL in collaboration with researchers throughout the world. We are investigating the expressibility of Sisal, as well as its performance on a shared-memory multiprocessor, the Y-MP. An interesting aspect of the project is that Sisal modules can call Fortran modules, and are callable by them. This eliminates the rewriting of 80% of the production code that would not benefit from parallel execution. Preliminary results indicate that the restrictive nature of the language does not cause problems in expressing the algorithms we have chosen. Some interesting aspects of programming in a mixed functional-imperative environment have surfaced, but can be managed. 8 refs.

  18. A systems neurophysiology approach to voluntary event coding.

    PubMed

    Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian

    2016-07-15

    Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward, with the "Theory of Event Coding (TEC)" being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine which dissociable sub-processes, identified by means of cognitive-neurophysiological methods, are involved in voluntary event coding. This was combined with EEG source localization. We also introduce reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that the processes involved in voluntary event coding predominantly included stimulus categorization, feature unbinding and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). At the systems-neurophysiology level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions, likely reflecting aspects of perceptual decision making, but did not modulate the other processes. The perceptual categorization stage appears central for voluntary event-file coding.

  19. A need for a code of ethics in science communication?

    NASA Astrophysics Data System (ADS)

    Benestad, R. E.

    2009-09-01

    The modern western civilization and high standard of living are to a large extent the 'fruits' of scientific endeavor over generations. Some examples include longer life expectancy due to progress in the medical sciences, and changes in infrastructure associated with the utilization of electromagnetism. Modern meteorology is not possible without state-of-the-art digital computers, satellites, remote sensing, and communications. Science is also of relevance for policy making, e.g. the present hot topic of climate change. Climate scientists have recently become much exposed to media focus and mass communications, a task for which many are not trained. Furthermore, science, communication, and politics have different objectives, and do not necessarily mix. Scientists have an obligation to provide unbiased information, and a code of ethics is needed to give guidance on acceptable and unacceptable conduct. Some examples of questionable conduct in Norway include using the title 'Ph.D' to imply scientific authority when the person has never obtained such an academic degree, or writing biased and one-sided articles in a Norwegian encyclopedia that do not reflect the scientific consensus. It is proposed here that a set of guidelines (for scientists and journalists) and a code of conduct could provide recommendations regarding how to act in the media - similar to a code of conduct with respect to carrying out research - to which everyone could agree, even when disagreeing on specific scientific questions.

  20. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  1. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  2. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  3. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  4. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  5. FIBWR: a steady-state core flow distribution code for boiling water reactors code verification and qualification report. Final report

    SciTech Connect

    Ansari, A.F.; Gay, R.R.; Gitnick, B.J.

    1981-07-01

    A steady-state core flow distribution code (FIBWR) is described. The ability of the recommended models to predict various pressure drop components and void distribution is shown by comparison to the experimental data. Application of the FIBWR code to the Vermont Yankee Nuclear Power Station is shown by comparison to the plant measured data.

  6. BEAM: a Monte Carlo code to simulate radiotherapy treatment units.

    PubMed

    Rogers, D W; Faddegon, B A; Ding, G X; Ma, C M; We, J; Mackie, T R

    1995-05-01

    This paper describes BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units. The code handles a variety of elementary geometric entities which the user puts together as needed (jaws, applicators, stacked cones, mirrors, etc.), thus allowing simulation of a wide variety of accelerators. The code is not restricted to cylindrical symmetry. It incorporates a variety of powerful variance reduction techniques such as range rejection, bremsstrahlung splitting and forcing photon interactions. The code allows direct calculation of charge in the monitor ion chamber. It has the capability of keeping track of each particle's history and using this information to score separate dose components (e.g., to determine the dose from electrons scattering off the applicator). The paper presents a variety of calculated results to demonstrate the code's capabilities. The calculated dose distributions in a water phantom irradiated by electron beams from the NRC 35 MeV research accelerator, a Varian Clinac 2100C, a Philips SL75-20, an AECL Therac 20 and a Scanditronix MM50 are all shown to be in good agreement with measurements at the 2 to 3% level. Eighteen electron spectra from four different commercial accelerators are presented and various aspects of the electron beams from a Clinac 2100C are discussed. Timing requirements and selection of parameters for the Monte Carlo calculations are discussed.
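One of the variance-reduction techniques mentioned, forcing photon interactions, can be sketched generically: the free path is sampled from an exponential distribution truncated to the region of interest, and the truncation probability is carried as a statistical weight. This is a textbook formulation under illustrative assumptions, not BEAM's actual code:

```python
import math
import random

def forced_interaction_depth(mu, thickness, rng=random):
    """Sample a forced photon interaction depth inside a slab.

    The free path is drawn from an exponential distribution truncated to
    [0, thickness]; the particle's statistical weight must be multiplied
    by the returned factor w = 1 - exp(-mu * thickness) to keep the
    estimator unbiased.
    """
    w = 1.0 - math.exp(-mu * thickness)
    u = rng.random()                       # uniform in [0, 1)
    depth = -math.log(1.0 - u * w) / mu    # inverse of the truncated CDF
    return depth, w
```

Every sampled photon then scores an interaction inside the slab, which reduces the variance of thin-target tallies at the cost of tracking weights.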

  7. Suppressing feedback in a distributed video coding system by employing real field codes

    NASA Astrophysics Data System (ADS)

    Louw, Daniel J.; Kaneko, Haruhiko

    2013-12-01

    Single-view distributed video coding (DVC) is a video compression method that allows for the computational complexity of the system to be shifted from the encoder to the decoder. The reduced encoding complexity makes DVC attractive for use in systems where processing power or energy use at the encoder is constrained, for example, in wireless devices and surveillance systems. One of the biggest challenges in implementing DVC systems is that the required rate must be known at the encoder. The conventional approach is to use a feedback channel from the decoder to control the rate. Feedback channels introduce their own difficulties such as increased latency and buffering requirements, which makes the resultant system unsuitable for some applications. Alternative approaches, which do not employ feedback, suffer from either increased encoder complexity due to performing motion estimation at the encoder, or an inaccurate rate estimate. Inaccurate rate estimates can result in a reduced average rate-distortion performance, as well as unpleasant visual artifacts. In this paper, the authors propose a single-view DVC system that does not require a feedback channel. The consequences of inaccuracies in the rate estimate are addressed by using codes defined over the real field and a decoder employing successive refinement. The result is a codec with performance that is comparable to that of a feedback-based system at low rates without the use of motion estimation at the encoder or a feedback path. The disadvantage of the approach is a reduction in average rate-distortion performance in the high-rate regime for sequences with significant motion.

  8. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users.

  9. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  10. A new balanced modulation code for a phase-image-based holographic data storage system

    NASA Astrophysics Data System (ADS)

    John, Renu; Joseph, Joby; Singh, Kehar

    2005-08-01

    We propose a new balanced modulation code for coding data pages for phase-image-based holographic data storage systems. The new code addresses the coding subtleties associated with phase-based systems while performing a content-based search in a holographic database. The new code, which is a balanced modulation code, is a modification of the existing 8:12 modulation code, and removes the false hits that occur in phase-based content-addressable systems due to phase-pixel subtractions. We demonstrate the better performance of the new code using simulations and experiments in terms of discrimination ratio while content addressing through a holographic memory. The new code is compared with the conventional coding scheme to analyse the false hits due to subtraction of phase pixels.
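The idea of a balanced modulation code can be illustrated generically: a 12-pixel block with exactly six "on" pixels offers C(12, 6) = 924 codewords, more than enough to represent one 8-bit symbol. The byte-to-codeword assignment below is a hypothetical lexicographic one; the paper's actual 8:12 mapping is not reproduced here:

```python
from itertools import combinations

# All 12-pixel blocks with exactly six "on" pixels: C(12, 6) = 924
# balanced codewords, enough to carry one 8-bit symbol (256 needed).
CODEWORDS = [tuple(1 if i in ones else 0 for i in range(12))
             for ones in combinations(range(12), 6)]

def encode(byte):
    """Hypothetical byte-to-codeword assignment (lexicographic order)."""
    return CODEWORDS[byte]
```

Because every codeword carries the same number of "on" pixels, block energy is constant, which is the property a balanced code exploits when discriminating matches during content addressing.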

  11. Overview of WARP, a particle code for Heavy Ion Fusion

    SciTech Connect

    Friedman, A.; Grote, D.P.; Callahan, D.A.; Langdon, A.B.; Haber, I.

    1993-02-22

    The beams in a Heavy Ion beam driven inertial Fusion (HIF) accelerator must be focused onto small spots at the fusion target, and so preservation of beam quality is crucial. The nonlinear self-fields of these space-charge-dominated beams can lead to emittance growth; thus a self-consistent field description is necessary. We have developed a multi-dimensional discrete-particle simulation code, WARP, and are using it to study the behavior of HIF beams. The code's 3d package combines features of an accelerator code and a particle-in-cell plasma simulation, and can efficiently track beams through many lattice elements and around bends. We have used the code to understand the physics of aggressive drift-compression in the MBE-4 experiment at Lawrence Berkeley Laboratory (LBL). We have applied it to LBL's planned ILSE experiments, to various "recirculator" configurations, and to the study of equilibria and equilibration processes. Applications of the 3d package to ESQ injectors, and of the r, z package to longitudinal stability in driver beams, are discussed in related papers.

  12. A TDM link with channel coding and digital voice.

    NASA Technical Reports Server (NTRS)

    Jones, M. W.; Tu, K.; Harton, P. L.

    1972-01-01

    The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
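Variable-slope delta modulation adapts its step size to the signal: a run of identical output bits indicates slope overload and grows the step, while alternating bits shrink it back. A generic one-bit encoder/decoder sketch (all parameter values are illustrative, not those of the TDM link described):

```python
def cvsd_encode(samples, min_step=1.0, max_step=64.0, gain=1.5, run=3):
    """One-bit variable-slope delta modulation encoder (generic sketch).

    Each output bit says whether the input is above the running estimate;
    `run` consecutive identical bits indicate slope overload and grow the
    step size, otherwise the step decays toward min_step.
    """
    bits, est, step, history = [], 0.0, min_step, []
    for s in samples:
        bit = 1 if s >= est else 0
        bits.append(bit)
        history = (history + [bit])[-run:]
        if len(history) == run and len(set(history)) == 1:
            step = min(step * gain, max_step)   # slope overload: speed up
        else:
            step = max(step / gain, min_step)   # tracking well: slow down
        est += step if bit else -step
    return bits

def cvsd_decode(bits, min_step=1.0, max_step=64.0, gain=1.5, run=3):
    """Matching decoder: replays the same step adaptation from the bits."""
    out, est, step, history = [], 0.0, min_step, []
    for bit in bits:
        history = (history + [bit])[-run:]
        if len(history) == run and len(set(history)) == 1:
            step = min(step * gain, max_step)
        else:
            step = max(step / gain, min_step)
        est += step if bit else -step
        out.append(est)
    return out
```

Because the decoder derives the step sequence from the bits alone, no side information is transmitted, which is what makes the scheme attractive for an all-digital voice channel.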

  13. DSD - A Particle Simulation Code for Modeling Dusty Plasmas

    NASA Astrophysics Data System (ADS)

    Joyce, Glenn; Lampe, Martin; Ganguli, Gurudas

    1999-11-01

    The NRL Dynamically Shielded Dust code (DSD) is a particle simulation code developed to study the behavior of strongly coupled, dusty plasmas. The model includes the electrostatic wake effects of plasma ions flowing through plasma electrons, collisions of dust and plasma particles with each other and with neutrals. The simulation model contains the short-range strong forces of a shielded Coulomb system, and the long-range forces that are caused by the wake. It also includes other effects of a flowing plasma such as drag forces. In order to model strongly coupled dust in plasmas, we make use of the techniques of molecular dynamics simulation, PIC simulation, and the "particle-particle/particle-mesh" (P3M) technique of Hockney and Eastwood. We also make use of the dressed test particle representation of Rostoker and Rosenbluth. Many of the techniques we use in the model are common to all PIC plasma simulation codes. The unique properties of the code follow from the accurate representation of both the short-range aspects of the interaction between dust grains, and long-range forces mediated by the complete plasma dielectric response. If the streaming velocity is zero, the potential used in the model reduces to the Debye-Hückel potential, and the simulation is identical to molecular dynamics models of the Yukawa potential. The plasma appears only implicitly through the plasma dispersion function, so it is not necessary in the code to resolve the fast plasma time scales.
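The zero-streaming limit mentioned in the abstract, the screened-Coulomb (Debye-Hückel/Yukawa) potential, is simple to write down. A Gaussian-units sketch, where `lambda_d` is the screening length:

```python
import math

def yukawa(q, r, lambda_d):
    """Screened Coulomb (Debye-Hückel / Yukawa) potential, Gaussian units.

    q: charge, r: separation, lambda_d: Debye screening length.
    """
    return (q / r) * math.exp(-r / lambda_d)
```

As `lambda_d` grows the screening factor approaches one and the bare Coulomb potential is recovered; molecular dynamics with this pair potential is the Yukawa-system limit the abstract refers to.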

  14. The (not so) social Simon effect: a referential coding account.

    PubMed

    Dolk, Thomas; Hommel, Bernhard; Prinz, Wolfgang; Liepelt, Roman

    2013-10-01

    The joint go-nogo Simon effect (social Simon effect, or joint cSE) has been considered as an index of automatic action/task co-representation. Recent findings, however, challenge extreme versions of this social co-representation account by suggesting that the (joint) cSE results from any sufficiently salient event that provides a reference for spatially coding one's own action. By manipulating the salient nature of reference-providing events in an auditory go-nogo Simon task, the present study indeed demonstrates that spatial reference events do not necessarily require social (Experiment 1) or movement features (Experiment 2) to induce action coding. As long as events attract attention in a bottom-up fashion (e.g., auditory rhythmic features; Experiments 3 and 4), events in an auditory go-nogo Simon task seem to be co-represented irrespective of the agent or object producing these events. This suggests that the cSE does not necessarily imply the co-representation of tasks. The theory of event coding provides a comprehensive account of the available evidence on the cSE: the presence of another salient event requires distinguishing the cognitive representation of one's own action from the representation of other events, which can be achieved by referential coding: the spatial coding of one's own action relative to the other events.

  15. Multisynaptic activity in a pyramidal neuron model and neural code.

    PubMed

    Ventriglia, Francesco; Di Maio, Vito

    2006-01-01

    The highly irregular firing of mammalian cortical pyramidal neurons is one of the most striking observations of brain activity. This result greatly affects the discussion on the neural code, i.e. how the brain codes information transmitted along the different cortical stages. In fact, it seems to favor one of the two main hypotheses about this issue, the rate code; but the supporters of the contrasting hypothesis, the temporal code, consider this evidence inconclusive. We discuss here a leaky integrate-and-fire model of a hippocampal pyramidal neuron, intended to be biologically sound, to investigate the genesis of the irregular pyramidal firing and to give useful information about the coding problem. To this aim, the complete set of excitatory and inhibitory synapses impinging on such a neuron has been taken into account. The firing activity of the neuron model has been studied by computer simulation, both in basic conditions and allowing brief periods of over-stimulation in specific regions of its synaptic constellation. Our results show neuronal firing conditions similar to those observed in experimental investigations of pyramidal cortical neurons. In particular, the coefficient of variation (CV) computed from the inter-spike intervals (ISIs) in our simulations for basic conditions is close to unity, as is that computed from experimental data. Our simulation also shows different behaviors in firing sequences for different frequencies of stimulation.
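The CV-of-ISIs measurement can be reproduced qualitatively with a generic leaky integrate-and-fire neuron driven by noisy current. All parameters below are illustrative, not those of the paper's full synaptic-constellation model:

```python
import math
import random

def lif_cv(n_steps=100000, dt=0.1, tau=20.0, v_th=1.0,
           mu=0.04, sigma=0.5, seed=1):
    """Leaky integrate-and-fire neuron with noisy drive; returns the
    coefficient of variation (CV) of its inter-spike intervals (ISIs).

    Times in ms; mu is a constant input current, sigma the noise
    amplitude. All values are illustrative placeholders.
    """
    rng = random.Random(seed)
    v, t, last_spike, isis = 0.0, 0.0, None, []
    for _ in range(n_steps):
        t += dt
        v += dt * (mu - v / tau) + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if v >= v_th:                       # threshold crossing: emit a spike
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike, v = t, 0.0          # reset membrane potential
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean
```

With these parameters the mean drive (mu * tau = 0.8) sits below threshold, so firing is noise-driven and irregular, the regime in which the CV is typically of order one, echoing the irregular firing the abstract discusses.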

  16. A unified model of the standard genetic code

    PubMed Central

    Morgado, Eberto R.

    2017-01-01

    The Rodin–Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.

  17. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.
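The six-dimensional hypercube representation used in both versions of this work rests on encoding each nucleotide with two bits, so that a codon becomes a vertex of the 6-cube. The particular 2-bit assignment below is one common convention and may differ from the paper's:

```python
# One common 2-bit encoding of the bases (the paper's own assignment
# may differ); each codon then maps to a vertex of the 6D hypercube.
BASE_BITS = {"C": (0, 0), "U": (0, 1), "G": (1, 0), "A": (1, 1)}

def codon_to_vertex(codon):
    """Map a three-letter codon to a vertex of the six-dimensional cube."""
    return tuple(bit for base in codon for bit in BASE_BITS[base])
```

Under such a mapping, single-base substitutions move along edges or short paths of the cube, which is what makes group actions (automorphisms of the 6-cube) a natural language for symmetries of the code.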

  18. Parallel Processing of a Groundwater Contaminant Code

    SciTech Connect

    Arnett, Ronald Chester; Greenwade, Lance Eric

    2000-05-01

    The U. S. Department of Energy’s Idaho National Engineering and Environmental Laboratory (INEEL) is conducting a field test of experimental enhanced bioremediation of trichloroethylene (TCE) contaminated groundwater. TCE is a chlorinated organic substance that was used as a solvent in the early years of the INEEL and in some cases disposed of to the aquifer. There is an effort underway to enhance the natural bioremediation of TCE by adding a non-toxic substance that serves as a feed material for the bacteria that can biologically degrade the TCE.

  19. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  20. A combinatorial code for pattern formation in Drosophila oogenesis

    PubMed Central

    Yakoby, N.; Bristow, C.A.; Gong, D.; Schafer, X.; Lembong, J.; Zartman, J.J.; Halfon, M.S.; Schüpbach, T.; Shvartsman, S.Y.

    2010-01-01

    Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggested that they follow a simple combinatorial code, based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of the inductive signals, provided by the highly conserved EGFR and DPP pathways. We demonstrated the validity of the code by testing it against a set of newly identified expression patterns, obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguished 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterized their joint dynamics over four stages of oogenesis. This work provides the first systematic analysis of the diversity and dynamics of two-dimensional gene expression patterns in a developing tissue. PMID:19000837
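The combinatorial operations named in the abstract (union, intersection, difference) are ordinary set operations on spatial domains, which can be sketched with hypothetical building blocks on a toy grid of follicle-cell positions:

```python
# Toy grid of follicle-cell positions with two hypothetical spatial
# building blocks; the real code uses six blocks tied to the EGFR and
# DPP inductive signals.
grid = {(x, y) for x in range(10) for y in range(10)}
dorsal = {(x, y) for (x, y) in grid if y >= 5}
anterior = {(x, y) for (x, y) in grid if x < 3}

pattern_union = dorsal | anterior          # union of two blocks
pattern_intersection = dorsal & anterior   # intersection
pattern_difference = dorsal - anterior     # difference
```

Each observed expression pattern is then matched to some such combination of building blocks, which is what makes a small vocabulary of blocks sufficient to describe dozens of distinct patterns.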

  1. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series.

  2. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. The John A. Hauck Foundation and Xavier University

  3. A neural coding scheme reproducing foraging trajectories

    PubMed Central

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series. PMID:26648311
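A two-dimensional Lévy-type trajectory of the kind discussed in both versions of this record can be generated generically by drawing step lengths from a heavy-tailed (Pareto) distribution with isotropic directions. This is a standard construction, not the WLC-based encoding the paper proposes:

```python
import math
import random

def levy_flight(n_steps, alpha=1.5, seed=0):
    """2D walk with Pareto-distributed step lengths, P(L > x) ~ x**(-alpha).

    For 0 < alpha < 2 the step-length variance diverges, producing the
    superdiffusive, scale-free jumps characteristic of Levy movement.
    """
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                 # uniform in (0, 1]
        step = u ** (-1.0 / alpha)             # Pareto step length, >= 1
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        path.append((x, y))
    return path
```

Plotting such a path shows the signature mix of many short local moves punctuated by rare long relocations seen in foraging trajectories.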

  4. Incorporation of Condensation Heat Transfer in a Flow Network Code

    NASA Technical Reports Server (NTRS)

    Anthony, Miranda; Majumdar, Alok; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    In this paper we have investigated the condensation of water vapor in a short tube. A numerical model of condensation heat transfer was incorporated in a flow network code. The flow network code that we have used in this paper is the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a finite-volume-based flow network code. Four different condensation models were presented in the paper. Soliman's correlation has been found to be the most stable at low flow rates, which are of particular interest in this application. Another highlight of this investigation is conjugate, or coupled, heat transfer between solid and fluid. This work was done in support of NASA's International Space Station program.

  5. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  6. SCAMPI: A code package for cross-section processing

    SciTech Connect

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  7. A sweet code for glycoprotein folding.

    PubMed

    Caramelo, Julio J; Parodi, Armando J

    2015-11-14

    Glycoprotein synthesis is initiated in the endoplasmic reticulum (ER) lumen upon transfer of a glycan (Glc3Man9GlcNAc2) from a lipid derivative to Asn residues (N-glycosylation). N-Glycan-dependent quality control of glycoprotein folding in the ER prevents exit to Golgi of folding intermediates, irreparably misfolded glycoproteins and incompletely assembled multimeric complexes. It also enhances folding efficiency by preventing aggregation and facilitating formation of proper disulfide bonds. The control mechanism essentially involves four components: resident lectin-chaperones (calnexin and calreticulin) that recognize monoglucosylated polymannose protein-linked glycans, a lectin-associated oxidoreductase acting on monoglucosylated glycoproteins (ERp57), a glucosyltransferase that creates monoglucosylated epitopes in protein-linked glycans (UGGT) and a glucosidase (GII) that removes the glucose units added by UGGT. This last enzyme is the only mechanism component sensing glycoprotein conformations, as it creates monoglucosylated glycans exclusively in not properly folded glycoproteins or in not completely assembled multimeric glycoprotein complexes. Glycoproteins that fail to properly fold are eventually driven to proteasomal degradation in the cytosol following the ER-associated degradation pathway, in which the extent of N-glycan demannosylation by ER mannosidases plays a relevant role in the identification of irreparably misfolded glycoproteins.

  8. NASTRAN as a resource in code development

    NASA Technical Reports Server (NTRS)

    Stanton, E. L.; Crain, L. M.; Neu, T. F.

    1975-01-01

    A case history is presented in which the NASTRAN system provided both guidelines and working software for use in the development of a discrete element program, PATCHES-111. To avoid duplication and to take advantage of the widespread user familiarity with NASTRAN, the PATCHES-111 system uses NASTRAN bulk data syntax, NASTRAN matrix utilities, and the NASTRAN linkage editor. Problems in developing the program are discussed along with details on the architecture of the PATCHES-111 parametric cubic modeling system. The system includes model construction procedures, checkpoint/restart strategies, and other features.

  9. Requirements for a multifunctional code architecture

    SciTech Connect

    Tiihonen, O.; Juslin, K.

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's desk by far exceeds what was available ten years ago. At the same time, the features of everyday office software tend to set standards for the way input data and calculational results are managed.

  10. Combat Injury Coding: A Review and Reconfiguration

    DTIC Science & Technology

    2013-01-01

    Edwin D'Souza, MS; Ross R. Vickers, PhD; Vern Wing, MS; Brian J. Eastridge, MD; Lee Ann Young, MS; Judy Dye, MSN; Mary Ann Spott, MPA, MBA; Donald H. Jenkins; J. Holcomb; L. H. Blackbourne; J. R. Ficke; E. J. Kalin … the clavicle, scapula, and pelvic girdle were grouped with the torso, where they are anatomically located, rather than with the upper and lower extremities

  11. The neural code for written words: a proposal.

    PubMed

    Dehaene, Stanislas; Cohen, Laurent; Sigman, Mariano; Vinckier, Fabien

    2005-07-01

    How is reading, a cultural invention, coded by neural populations in the human brain? The neural code for written words must be abstract, because we can recognize words regardless of their location, font and size. Yet it must also be exquisitely sensitive to letter identity and letter order. Most existing coding schemes are insufficiently invariant or incompatible with the constraints of the visual system. We propose a tentative neuronal model according to which part of the occipito-temporal 'what' pathway is tuned to writing and forms a hierarchy of local combination detectors sensitive to increasingly larger fragments of words. Our proposal can explain why the detection of 'open bigrams' (ordered pairs of letters) constitutes an important stage in visual word recognition.
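    The 'open bigram' stage described in this proposal is easy to make concrete. The sketch below is our illustration, not the authors' model; in particular the `max_gap` parameter (how many intervening letters are tolerated) is an assumed value. It lists the ordered letter pairs that occur, in order, within a word:

    ```python
    from itertools import combinations

    def open_bigrams(word, max_gap=2):
        """Return the set of ordered letter pairs ('open bigrams') whose
        letters appear in order within `word`, separated by at most
        `max_gap` intervening letters."""
        pairs = set()
        for i, j in combinations(range(len(word)), 2):
            if j - i - 1 <= max_gap:  # letters between the pair
                pairs.add(word[i] + word[j])
        return pairs
    ```

    For example, `open_bigrams("cart", max_gap=0)` returns only the adjacent pairs, while a larger gap tolerance adds pairs such as "ct", giving the location-invariant but order-sensitive representation the proposal requires.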

  12. A Method for Automated Program Code Testing

    ERIC Educational Resources Information Center

    Drasutis, Sigitas; Motekaityte, Vida; Noreika, Algirdas

    2010-01-01

    The Internet has recently encouraged society to convert almost all its needs to electronic resources such as e-libraries, e-cultures, e-entertainment as well as e-learning, which has become a radical idea to increase the effectiveness of learning services in most schools, colleges and universities. E-learning cannot be completely featured and…

  13. BTREE: A FORTRAN Code for B+ Tree.

    DTIC Science & Technology

    2014-09-26

    …such large databases. NSWC TR 85-54, REFERENCES: 1. Comer, D., "The Ubiquitous B-Tree," Computing Surveys, Vol. 11, 1979, pp. 121-137. 2. Knuth, D. … "The Ubiquitous B-Tree" by Douglas Comer, Computing Surveys 11 (1979) 121-137; a more complete discussion can be found in "The Art of Computer

  14. Chemical ubiquitination for decrypting a cellular code

    PubMed Central

    Stanley, Mathew; Virdee, Satpal

    2016-01-01

    The modification of proteins with ubiquitin (Ub) is an important regulator of eukaryotic biology and deleterious perturbation of this process is widely linked to the onset of various diseases. The regulatory capacity of the Ub signal is high and, in part, arises from the capability of Ub to be enzymatically polymerised to form polyubiquitin (polyUb) chains of eight different linkage types. These distinct polyUb topologies can then be site-specifically conjugated to substrate proteins to elicit a number of cellular outcomes. Therefore, to further elucidate the biological significance of substrate ubiquitination, methodologies that allow the production of defined polyUb species, and substrate proteins that are site-specifically modified with them, are essential to progress our understanding. Many chemically inspired methods have recently emerged which fulfil many of the criteria necessary for achieving deeper insight into Ub biology. With a view to providing immediate impact in traditional biology research labs, the aim of this review is to provide an overview of the techniques that are available for preparing Ub conjugates and polyUb chains with focus on approaches that use recombinant protein building blocks. These approaches either produce a native isopeptide, or analogue thereof, that can be hydrolysable or non-hydrolysable by deubiquitinases. The most significant biological insights that have already been garnered using such approaches will also be summarized. PMID:27208213

  15. StarFinder: A code for stellar field analysis

    NASA Astrophysics Data System (ADS)

    Diolaiti, Emiliano; Bendinelli, Orazio; Bonaccini, Domenico; Close, Laird M.; Currie, Doug G.; Parmeggiani, Gianluigi

    2000-11-01

    StarFinder is an IDL code for the deep analysis of stellar fields, designed for well-sampled Adaptive Optics images with high and low Strehl ratios. The Point Spread Function is extracted directly from the frame, to take into account the actual structure of the instrumental response and the atmospheric effects. The code is written in IDL and organized in the form of a self-contained widget-based application, provided with a series of tools for data visualization and analysis. A description of the method and some applications to Adaptive Optics data are presented.

  16. LOOPREF: A Fluid Code for the Simulation of Coronal Loops

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda; Antiochos, Spiro; Spicer, Daniel

    1998-01-01

    This report documents the code LOOPREF. LOOPREF is a semi-one-dimensional finite element code that is especially well suited to simulate coronal-loop phenomena. It has a full implementation of adaptive mesh refinement (AMR), which is crucial for this type of simulation. The AMR routines are an improved version of AMR1D. LOOPREF's versatility makes it suitable to simulate a wide variety of problems. In addition to efficiently providing very high resolution in rapidly changing regions of the domain, it is equipped to treat loops of variable cross section, any non-linear form of heat conduction, shocks, gravitational effects, and radiative loss.

  17. A Survey of Electric Laser Codes.

    DTIC Science & Technology

    1983-06-01

    William F. Bailey (513) 255-2012, R&D Associates; Peter Crowell (505) 844-3013, Joint Inst. for Lab. Astrophysics; L. C. Pitchford (303) 492-8255 … Morris (213) 341-9172, Physical Sciences, Inc.; Paul Lewis (617) 933-8500; also Raymond Taylor (617) 546-7798, Rocketdyne; E. Wheatley (213) … Pitchford (originator). Organization (to 11/28/80): J.I.L.A. Address: U. of Colorado, Boulder, CO 80309; after 1/1/81: Sandia Laboratories

  18. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    SciTech Connect

    Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben

    2015-07-20

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H{sub 2} are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.
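    The idea of treating chemical species as passively advected scalars can be illustrated with a far simpler scheme than KM2's Godunov solver. The following minimal sketch is our assumption, not the code's actual method: first-order upwind advection of one scalar on a periodic 1D grid with constant positive velocity.

    ```python
    def advect_scalar(c, u, dx, dt, steps):
        """First-order upwind advection of a passive scalar `c` on a
        periodic 1D grid with constant velocity u > 0. This is a toy
        stand-in for a Godunov-type finite volume update."""
        n = len(c)
        cfl = u * dt / dx
        assert 0 < cfl <= 1, "CFL condition violated"
        for _ in range(steps):
            c = [c[i] - cfl * (c[i] - c[i - 1]) for i in range(n)]
        return c
    ```

    First-order upwind is diffusive but conservative (the total scalar mass is preserved on a periodic grid); production hydrochemical codes use higher-order Godunov fluxes for the same advection step.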

  19. Performance of a space-time block coded code division multiple access system over Nakagami-m fading channels

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo

    2010-09-01

    By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.
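    The abstract does not spell out which orthogonal code is used, but the classic 2x1 Alamouti scheme illustrates why orthogonal STBC decoding has linear complexity: after two time slots, simple linear combining decouples the two transmitted symbols. A minimal noiseless sketch (illustrative only; the function names are ours):

    ```python
    def alamouti_encode(s1, s2):
        """Two time slots over two antennas: slot 1 sends (s1, s2),
        slot 2 sends (-s2*, s1*)."""
        return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

    def alamouti_combine(r1, r2, h1, h2):
        """Linear combining at a single receive antenna. Channel
        orthogonality decouples the symbols: each estimate equals
        (|h1|^2 + |h2|^2) times the transmitted symbol (noiseless)."""
        s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
        s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
        return s1_hat, s2_hat
    ```

    The decoder is a handful of complex multiply-adds per symbol, which is the linear-complexity behavior the paper exploits, in contrast to exponential-complexity joint detection.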

  20. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  1. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  2. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ˜30 MB for a

  3. Towards a 3D Space Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human-rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion, limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full-3D space shielding code is presented.

  4. ELEFANT: a user-friendly multipurpose geodynamics code

    NASA Astrophysics Data System (ADS)

    Thieulot, C.

    2014-07-01

    A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique: Lagrangian markers are used to track materials in the simulation domain, which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented, including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS, whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics, presented as a case study, illustrates the potential of the code to model large-scale, high-resolution thermo-mechanically coupled free surface flows.
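    The Marker-in-Cell bookkeeping mentioned above can be sketched in one dimension. This toy is not ELEFANT's implementation; the forward-Euler update and the function names are our assumptions. It advects Lagrangian markers through a prescribed velocity field and reports the marker (number) density per grid cell:

    ```python
    def advect_markers(positions, velocity, dt, steps):
        """Forward-Euler advection of Lagrangian marker positions
        through a prescribed 1D velocity field (the tracking step of
        the Marker-in-Cell technique)."""
        for _ in range(steps):
            positions = [x + velocity(x) * dt for x in positions]
        return positions

    def markers_per_cell(positions, dx, ncells):
        """Marker (number) density per grid cell; in a real code this
        count drives dynamic marker addition/removal."""
        counts = [0] * ncells
        for x in positions:
            i = int(x / dx)
            if 0 <= i < ncells:
                counts[i] += 1
        return counts
    ```

    Keeping the per-cell count within bounds is what "variable and dynamically adapted" marker density refers to: cells that drain of markers get new ones injected, overcrowded cells are thinned.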

  5. You've Written a Cool Astronomy Code! Now What Do You Do with It?

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.

    2014-01-01

    Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.

  6. A Radiation Shielding Code for Spacecraft and Its Validation

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.

    2000-01-01

    The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.

  7. Comparative noise performance of a coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Piper, Jonathan; Yuen, Peter; Godfree, Peter; Ding, Mengjia; Soori, Umair; Selvagumar, Senthurran; James, David

    2016-10-01

    Novel types of spectral sensors using coded apertures may offer various advantages over conventional designs, especially the possibility of compressive measurements that could exceed the expected spatial, temporal or spectral resolution of the system. However, the nature of the measurement process imposes certain limitations, especially on the noise performance of the sensor. This paper considers a particular type of coded-aperture spectral imager and uses analytical and numerical modelling to compare its expected noise performance with conventional hyperspectral sensors. It is shown that conventional sensors may have an advantage in conditions where signal levels are high, such as bright light or slow scanning, but that coded-aperture sensors may be advantageous in low-signal conditions.

  8. Validation of a comprehensive space radiation transport code.

    PubMed

    Shinn, J L; Cucinotta, F A; Simonsen, L C; Wilson, J W; Badavi, F F; Badhwar, G D; Miller, J; Zeitlin, C; Heilbronn, L; Tripathi, R K; Clowdsley, M S; Heinbockel, J H; Xapsos, M A

    1998-12-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in design of the SAGE-III instrument, resulting in material changes to control injurious neutron production, in the study of Space Shuttle single event upsets, and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation.

  9. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  10. Wolof Syllable Structure: Evidence from a Secret Code.

    ERIC Educational Resources Information Center

    Ka, Omar

    A structural analysis provides new evidence concerning the internal structure of the syllable in Wolof, a West African language, through examination of the secret code called Kall, spoken mainly in Senegal's Ceneba area. It is proposed that Kall is better described as involving primarily a reduplication of the prosodic word. The first section…

  11. Breaking the Genetic Code in a Letter by Max Delbruck.

    ERIC Educational Resources Information Center

    Fox, Marty

    1996-01-01

    Describes a classroom exercise that uses a letter from Max Delbruck to George Beadle to stimulate interest in the mechanics of a nonoverlapping comma-free code. Enables students to participate in the rich history of molecular biology and illustrates to them that scientists and science can be fun. (JRH)
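    The comma-free property at the heart of this exercise can be checked mechanically: a set of equal-length codons is comma-free if no frame-shifted word read across the junction of any two codewords is itself a codeword. A small checker (our sketch, assuming all codons have equal length):

    ```python
    def is_comma_free(codons):
        """Return True if the block code `codons` is comma-free: no
        out-of-frame word straddling the junction of two codewords is
        itself a codeword."""
        codons = set(codons)
        n = len(next(iter(codons)))
        for x in codons:
            for y in codons:
                pair = x + y
                for shift in range(1, n):
                    if pair[shift:shift + n] in codons:
                        return False
        return True
    ```

    For instance, any code containing "AAA" fails immediately (reading "AAAAAA" out of frame still yields "AAA"), which is exactly the kind of constraint the comma-free hypothesis imposed on candidate genetic codes.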

  12. A Learning Environment for English Vocabulary Using Quick Response Codes

    ERIC Educational Resources Information Center

    Arikan, Yuksel Deniz; Ozen, Sevil Orhan

    2015-01-01

    This study focuses on the process of developing a learning environment that uses tablets and Quick Response (QR) codes to enhance participants' English language vocabulary knowledge. The author employed the concurrent triangulation strategy, a mixed research design. The study was conducted at a private school in Izmir, Turkey during the 2012-2013…

  13. A code for hadrontherapy treatment planning with the voxelscan method.

    PubMed

    Berga, S; Bourhaleb, F; Cirio, R; Derkaoui, J; Gallice, B; Hamal, M; Marchetto, F; Rolando, V; Viscomi, S

    2000-11-01

    A code for the implementation of treatment planning in hadrontherapy with an active scan beam is presented. The package can determine the fluence and energy of the beams for several thousand voxels in a few minutes. The performance of the program has been tested with a full simulation.

  14. Ethical coding.

    PubMed

    Resnik, Barry I

    2009-01-01

    It is ethical, legal, and proper for a dermatologist to maximize income through proper coding of patient encounters and procedures. The overzealous physician can misinterpret reimbursement requirements or receive bad advice from other physicians and cross the line from aggressive coding to coding fraud. Several of the more common problem areas are discussed.

  15. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  17. LACEwING: A New Moving Group Analysis Code

    NASA Astrophysics Data System (ADS)

    Riedel, Adric R.; Blunt, Sarah C.; Lambrides, Erini L.; Rice, Emily L.; Cruz, Kelle L.; Faherty, Jacqueline K.

    2017-03-01

    We present a new nearby young moving group (NYMG) kinematic membership analysis code, LocAting Constituent mEmbers In Nearby Groups (LACEwING), a new Catalog of Suspected Nearby Young Stars, a new list of bona fide members of moving groups, and a kinematic traceback code. LACEwING is a convergence-style algorithm with carefully vetted membership statistics based on a large numerical simulation of the Solar Neighborhood. Given spatial and kinematic information on stars, LACEwING calculates membership probabilities in 13 NYMGs and three open clusters within 100 pc. In addition to describing the inputs, methods, and products of the code, we provide comparisons of LACEwING to other popular kinematic moving group membership identification codes. As a proof of concept, we use LACEwING to reconsider the membership of 930 stellar systems in the Solar Neighborhood (within 100 pc) that have reported measurable lithium equivalent widths. We quantify the evidence in support of a population of young stars not attached to any NYMGs, which is a possible sign of new as-yet-undiscovered groups or of a field population of young stars.

  18. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
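The decorrelation and quantization stages of such a pipeline can be sketched with the simplest subband filter pair, a one-level Haar split; the arithmetic entropy-coding stage is omitted. This is a minimal illustration of the subband idea, not the article's actual filter bank, and the quantizer step sizes are arbitrary assumptions.

```python
def haar_split(x):
    """One-level Haar analysis: split an even-length signal into a
    low-pass (average) and a high-pass (difference) subband."""
    lo = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return lo, hi

def haar_merge(lo, hi):
    """Inverse of haar_split (perfect reconstruction)."""
    x = []
    for a, d in zip(lo, hi):
        x.extend([a + d, a - d])
    return x

def quantize(band, step):
    """Uniform scalar quantizer: the stage that introduces controlled
    distortion in exchange for compression."""
    return [round(v / step) for v in band]

def dequantize(indices, step):
    return [i * step for i in indices]

signal = [4.0, 4.0, 5.0, 7.0, 9.0, 8.0, 2.0, 1.0]
lo, hi = haar_split(signal)
# Quantize the high-pass band coarsely, where the signal carries less energy.
rec = haar_merge(dequantize(quantize(lo, 0.5), 0.5),
                 dequantize(quantize(hi, 2.0), 2.0))
```

Transmitting the low-pass band first and refining with the high-pass band later is also the essence of the progressive-transmission mode the article describes.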

  19. NERO- a post-maximum supernova radiation transport code

    NASA Astrophysics Data System (ADS)

    Maurer, I.; Jerkstrand, A.; Mazzali, P. A.; Taubenberger, S.; Hachinger, S.; Kromer, M.; Sim, S.; Hillebrandt, W.

    2011-12-01

    The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a non-local thermodynamic equilibrium (NLTE) treatment of radiation transport, has rarely been studied. In this paper, we present a new SN radiation transport code, NERO, which can model these epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and the intermediate nebular phase. As a test, we compare NERO to the radiation transport code of Jerkstrand, Fransson & Kozma and to the nebular code of Mazzali et al. All three codes have been developed independently and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' Type Ia supernova (SN Ia) 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post-maximum epochs.

  20. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.
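The vector-quantization stage of such a scheme can be sketched as nearest-codeword search over small pixel blocks: each block is replaced by the index of its closest codebook vector, and the decoder simply looks indices back up. The tiny three-entry codebook and 2x2 blocks below are illustrative assumptions; a practical codebook would be trained (e.g. with the LBG algorithm) on wavelet coefficients.

```python
def nearest(block, codebook):
    """Index of the codebook vector closest (in squared error) to block."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: d2(block, codebook[k]))

def vq_encode(blocks, codebook):
    """Each block collapses to a single small integer: the compression."""
    return [nearest(b, codebook) for b in blocks]

def vq_decode(indices, codebook):
    """Lossy reconstruction: each index maps back to its codeword."""
    return [codebook[i] for i in indices]

# flattened 2x2 pixel blocks and a toy grey-level codebook
codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]
blocks = [(10, 0, 5, 0), (130, 120, 125, 131)]
idx = vq_encode(blocks, codebook)  # one index per 4-pixel block
```

The compression ratio comes from sending only `idx` (a few bits per block) instead of the block's pixel values; the distortion is the gap between each block and its chosen codeword.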

  1. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    ERIC Educational Resources Information Center

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  2. Dependent video coding using a tree representation of pixel dependencies

    NASA Astrophysics Data System (ADS)

    Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano

    2011-09-01

    Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large descendance of pixels which are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented in a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due both to the presence of complex encoder features not captured by the model, and to the complexity of the source to be encoded.
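The core bookkeeping of such a dependency tree can be sketched as a parent array (each pixel points at its reference pixel) from which descendant counts are computed; pixels with large descendance are the ones that deserve higher quality. The 0.1 discount weight below is a hypothetical stand-in for the paper's Lagrangian discount term.

```python
def descendant_counts(parent):
    """parent[i] is the reference pixel of pixel i (-1 for a root).
    Returns, for each pixel, how many pixels depend on it directly or
    transitively through motion-compensated prediction."""
    n = len(parent)
    count = [0] * n
    for i in range(n):
        p = parent[i]
        while p != -1:          # walk up to the root, crediting each ancestor
            count[p] += 1
            p = parent[p]
    return count

# pixels 1..4 predicted from pixel 0; pixel 5 predicted from pixel 4
parent = [-1, 0, 0, 0, 0, 4]
counts = descendant_counts(parent)       # [5, 0, 0, 0, 1, 0]
# hypothetical discount: spend more rate where the descendance is large
discount = [0.1 * c for c in counts]
```

In a double-pass encoder, the first pass would build `parent` from the chosen motion vectors and the second pass would subtract `discount[i]` from pixel i's Lagrangian cost, biasing quality toward tree roots.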

  3. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

    A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines, achieving an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.

  4. GERMINAL — A computer code for predicting fuel pin behaviour

    NASA Astrophysics Data System (ADS)

    Melis, J. C.; Roche, L.; Piron, J. P.; Truffert, J.

    1992-06-01

    Within the framework of R&D on FBR fuels, CEA/DEC is developing the computer code GERMINAL to study fuel pin thermal-mechanical behaviour during steady-state and incidental conditions. The development of GERMINAL is foreseen in two steps: (1) The GERMINAL 1 code, designed as a "workhorse" for immediate applications. Version 1 of GERMINAL 1 is presently delivered, fully documented, with physical qualification guaranteed up to 8 at%. Version 2 of GERMINAL 1, in addition to what is presently treated in GERMINAL 1, includes the treatment of high-burnup effects on fission gas release and the fuel-clad joint. This version, GERMINAL 1.2, is presently under testing and will be completed by the end of 1991. (2) The GERMINAL 2 code, designed as a reference code for future applications, will cover all the aspects of GERMINAL 1 (including high-burnup effects) with a more general mechanical treatment and a completely revised and more advanced software structure.

  5. HINCOF-1: a Code for Hail Ingestion in Engine Inlets

    NASA Technical Reports Server (NTRS)

    Gopalaswamy, N.; Murthy, S. N. B.

    1995-01-01

    One of the major concerns during hail ingestion into an engine is the resulting amount and space- and time-wise distribution of hail at the engine face for a given inlet geometry and set of atmospheric and flight conditions. The appearance of hail in the capture streamtube is invariably random in space and time, with respect to size and momentum. During its motion through an inlet, a hailstone undergoes several processes, namely impact with other hailstones and with material surfaces of the inlet and spinner; rolling and rebound following impact; heat and mass transfer; phase change; and shattering, the latter three due to friction and impact. Taking all of these factors into account, a numerical code, designated HINCOF-I, has been developed for determining the motion of hailstones from the atmosphere, through an inlet, and up to the engine face. The numerical procedure is based on the Monte-Carlo method. The report presents a description of the code, along with several illustrative cases. The code can be utilized to relate the spinner geometry - conical or, more effectively, elliptical - to the possible diversion of hail at the engine face into the bypass stream. The code is also useful for assessing the influence of various hail characteristics on the ingestion and distribution of hailstones over the engine face.

  6. Bio—Cryptography: A Possible Coding Role for RNA Redundancy

    NASA Astrophysics Data System (ADS)

    Regoli, M.

    2009-03-01

    The RNA-Crypto System (RCS for short) is a symmetric-key algorithm for enciphering data. The idea for this new algorithm comes from the observation of nature, in particular of RNA behavior and some of its properties. RNA sequences have sections called introns. Introns, whose name derives from "intragenic regions," are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in the pre-mRNA are not yet clear and are the subject of intensive research by biologists; in our case, however, we use the presence of introns in the RNA-Crypto System output as a strong method to add chaotic non-coding information and to obscure access to the secret key used to encode messages. In the RNA-Crypto System algorithm, the introns are sections of the ciphered message carrying non-coding information, just as in the precursor mRNA.

  7. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
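The key to this kind of parallelization is that each rank draws from an independent random-number stream (the role SPRNG and DCMT play above), so partial results can simply be summed. The sketch below illustrates that pattern with a sequential loop standing in for MPI ranks; the exponential free-path sampling is an illustrative toy, not MC4's electron transport physics.

```python
import random

def simulate_chunk(seed, n, mfp=1.0):
    """One worker's share of histories, using an independent RNG stream
    (a distinct seed per rank, analogous to SPRNG/DCMT sub-streams)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += rng.expovariate(1.0 / mfp)  # sample a free-path length
    return total

def parallel_mean_path(n_workers=4, n_per_worker=50_000):
    # With MPI, each rank would run simulate_chunk and the partial sums
    # would be combined with a reduce; here we loop sequentially.
    partial = [simulate_chunk(seed, n_per_worker) for seed in range(n_workers)]
    return sum(partial) / (n_workers * n_per_worker)

est = parallel_mean_path()  # converges toward the mean free path, 1.0
```

Because the streams are independent, the combined estimate is statistically identical to a serial run with the same total number of histories, which is exactly the serial/parallel equivalence the validation in the abstract checks.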

  8. A post-processor for the PEST code

    NASA Astrophysics Data System (ADS)

    Preische, S.; Manickam, J.; Johnson, J. L.

    1993-08-01

    A new post-processor has been developed for use with output from the PEST tokamak stability code. It allows us to use quantities calculated by PEST and take better advantage of the physical picture of the plasma instability which they can provide. This will improve comparison with experimentally measured quantities as well as facilitate understanding of theoretical studies.

  9. Code-Switching in a College Mathematics Classroom

    ERIC Educational Resources Information Center

    Chitera, Nancy

    2009-01-01

    This paper presents the findings that emerged from the discourse practices of mathematics teacher educators in initial teacher training colleges in Malawi. It examines how mathematics teacher educators construct a multilingual classroom and how they view code-switching. The discussion is based on pre-observation interviews with four…

  10. A high performance spectral code for nonlinear MHD stability

    SciTech Connect

    Taylor, M.

    1992-09-01

    A new spectral code, NSTAB, has been developed to do nonlinear stability and equilibrium calculations for the magnetohydrodynamic (MHD) equations in three dimensional toroidal geometries. The code has the resolution to test nonlinear stability by calculating bifurcated equilibria directly. These equilibria consist of weak solutions with current sheets near rational surfaces and other less localized modes. Bifurcated equilibria with a pronounced current sheet where the rotational transform crosses unity are calculated for the International Thermonuclear Experimental Reactor (ITER). Bifurcated solutions with broader resonances are found for the LHD stellarator currently being built in Japan and an optimized configuration like the Wendelstein VII-X proposed for construction in Germany. The code is able to handle the many harmonics required to capture the high mode number of these instabilities. NSTAB builds on the highly successful BETAS code, which applies the spectral method to a flux coordinate formulation of the variational principle associated with the MHD equilibrium equations. However, a new residue condition for the location of the magnetic axis has been developed and implemented. This condition is based on the weak formulation of the equations and imposes no constraints on the inner flux surfaces.

  11. Codes of Ethics in Australian Education: Towards a National Perspective

    ERIC Educational Resources Information Center

    Forster, Daniella J.

    2012-01-01

    Teachers have a dual moral responsibility as both values educators and moral agents representing the integrity of the profession. Codes of ethics and conduct in teaching articulate shared professional values and aim to provide some guidance for action around recognised issues special to the profession but are also instruments of regulation which…

  12. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba quit the competition, there is only one blue-laser disc standard: Blu-ray Disc (BD), which satisfies the demands of high-density video programs. However, almost all of the relevant patents are held by large companies such as Sony and Philips, so substantial royalties must be paid whenever a product uses BD. Our own high-density optical disc storage system, the Next-Generation Versatile Disc (NVD), proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity that can meet the demands of playing high-density video programs. In this paper, we develop a new channel encoding process based on Low-Density Parity-Check (LDPC) codes and an application scheme using a Q-matrix-based LDPC encoding in NVD's channel decoder. Combined with the portable, embedded-system features of an SOPC system, we have implemented all of the decoding modules on an FPGA. Tests were carried out in the NVD experimental environment. Although LDPC codes can conflict with the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of NVD's former decoding system, which was implemented in hardware.
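LDPC decoding in hardware is usually message passing, but its simplest hard-decision relative, Gallager's bit-flipping algorithm, fits in a few lines and conveys the idea: repeatedly flip the bit involved in the most unsatisfied parity checks. The tiny (7,4) parity-check matrix below is a toy stand-in; a real LDPC matrix is large and sparse, and this sketch is not the paper's Q-matrix scheme.

```python
H = [  # parity-check matrix of a toy (7,4) code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Which parity checks the word violates (mod-2 row sums)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(word, max_iters=10):
    """Hard-decision bit-flipping: flip the bit participating in the
    most unsatisfied checks, until the syndrome clears."""
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(word)
        if not any(s):
            return word  # all parity checks satisfied
        votes = [sum(H[r][i] for r in range(len(H)) if s[r])
                 for i in range(len(word))]
        word[votes.index(max(votes))] ^= 1  # flip the worst offender
    return word

received = [0, 0, 0, 0, 0, 0, 1]       # all-zero codeword, last bit corrupted
decoded = bit_flip_decode(received)    # recovers the all-zero codeword
```

Soft-decision belief propagation replaces the integer votes with log-likelihood messages, which is what gives LDPC codes the coding efficiency cited in the abstract.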

  13. A NEW CODE FOR PROTO-NEUTRON STAR EVOLUTION

    SciTech Connect

    Roberts, L. F.

    2012-08-20

    A new code for following the evolution and emissions of proto-neutron stars during the first minute of their lives is developed and tested. The code is one dimensional, fully implicit, and general relativistic. Multi-group, multi-flavor neutrino transport is incorporated that makes use of variable Eddington factors obtained from a formal solution of the static general relativistic Boltzmann equation with linearized scattering terms. The timescales of neutrino emission and spectral evolution obtained using the new code are broadly consistent with previous results. Unlike other recent calculations, however, the new code predicts that the neutrino-driven wind will be characterized, at least for part of its existence, by a neutron excess. This change, potentially consequential for nucleosynthesis in the wind, is due to an improved treatment of the charged current interactions of electron-flavored neutrinos and anti-neutrinos with nucleons. A comparison is also made between the results obtained using either variable Eddington factors or simple equilibrium flux-limited diffusion. The latter approximation, which has been frequently used in previous studies of proto-neutron star cooling, accurately describes the total neutrino luminosities (to within 10%) for most of the evolution, until the proto-neutron star becomes optically thin.

  14. Connecting Neural Coding to Number Cognition: A Computational Account

    ERIC Educational Resources Information Center

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…

  15. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  16. A Comparison of Source Code Plagiarism Detection Engines

    ERIC Educational Resources Information Center

    Lancaster, Thomas; Culwin, Fintan

    2004-01-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and…

  17. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

    A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 97), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to accurately compute the thermal history of the fluid component. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock-wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density ΛCDM universe are finally reported, with 256^3 particles and 4.1 × 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum and individual halo temperature profiles, shows that the numerical results converge down to the actual resolution limit of the code and are well reproduced by recent analytical predictions in the framework of the halo model.
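The cell-by-cell recursive refinement that the tree-based AMR structure enables can be sketched with a simple quadtree: each cell flagged by a refinement criterion is split into four children, recursively, up to a maximum level. The density-peak criterion below is an illustrative assumption, not RAMSES's actual refinement strategy.

```python
class Cell:
    """A square AMR cell, refined cell-by-cell into four children."""
    def __init__(self, x, y, size, level):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

def refine(cell, needs_refine, max_level):
    """Recursively split any cell flagged by the criterion, building
    the tree-based grid hierarchy."""
    if cell.level >= max_level or not needs_refine(cell):
        return
    h = cell.size / 2
    for dx in (0, h):
        for dy in (0, h):
            child = Cell(cell.x + dx, cell.y + dy, h, cell.level + 1)
            refine(child, needs_refine, max_level)
            cell.children.append(child)

def leaves(cell):
    """The active cells of the AMR grid (the finest local resolution)."""
    if not cell.children:
        return [cell]
    return [leaf for c in cell.children for leaf in leaves(c)]

# toy criterion: refine only cells close to a density peak at the origin
near_peak = lambda c: (c.x ** 2 + c.y ** 2) ** 0.5 < c.size
root = Cell(0.0, 0.0, 1.0, 0)
refine(root, near_peak, max_level=4)
```

The payoff of the cell-by-cell approach is visible in the leaf list: resolution concentrates around the peak while the rest of the domain stays coarse, which is how a formal 8192^3 resolution becomes affordable with only ~10^7 cells.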

  18. TESS: A RELATIVISTIC HYDRODYNAMICS CODE ON A MOVING VORONOI MESH

    SciTech Connect

    Duffell, Paul C.; MacFadyen, Andrew I. E-mail: macfadyen@nyu.edu

    2011-12-01

    We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical, and cylindrical coordinates and is modular so that auxiliary physics solvers are readily integrated into the TESS framework and so that this can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.

  19. A surface definition code for turbine blade surfaces

    SciTech Connect

    Yang, S L; Oryang, D; Ho, M J

    1992-05-01

    A numerical interpolation scheme has been developed for generating the three-dimensional geometry of wind turbine blades. The numerical scheme consists of (1) creating the frame of the blade through the input of two or more airfoils at specific spanwise stations and then scaling and twisting them according to the prescribed distributions of chord, thickness, and twist along the span of the blade; (2) transforming the physical coordinates of the blade frame into a computational domain that complies with the interpolation requirements; and finally (3) applying the bi-tension spline interpolation method, in the computational domain, to determine the coordinates of any point on the blade surface. Detailed descriptions of the overall approach and philosophy of the code development are given, along with the operation of the code. To show the usefulness of the bi-tension spline interpolation code developed, two examples are given, namely CARTER and MICON blade surface generation. Numerical results are presented in both graphic and data forms. The solutions obtained in this work show that the computer code developed can be a powerful tool for generating the surface coordinates for any three-dimensional blade.
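Step (1) of the scheme, scaling and twisting an input airfoil and placing it at its spanwise station, can be sketched directly; the bi-tension spline interpolation of steps (2)-(3) is omitted. The four-point "airfoil" and the function name are illustrative assumptions.

```python
import math

def section_to_3d(airfoil, span, chord, twist_deg):
    """Scale a unit-chord airfoil section by the local chord, rotate it
    by the local twist angle, and place it at the spanwise station."""
    t = math.radians(twist_deg)
    pts = []
    for x, y in airfoil:
        xs, ys = x * chord, y * chord                 # chord scaling
        xr = xs * math.cos(t) - ys * math.sin(t)      # twist rotation
        yr = xs * math.sin(t) + ys * math.cos(t)
        pts.append((xr, yr, span))                    # spanwise placement
    return pts

# a crude 4-point "airfoil" with unit chord, leading edge at the origin
airfoil = [(0.0, 0.0), (0.5, 0.08), (1.0, 0.0), (0.5, -0.04)]
tip = section_to_3d(airfoil, span=10.0, chord=0.6, twist_deg=0.0)
```

Applying this at each input station produces the blade "frame" that the spline interpolation then fills in between stations.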

  20. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    NASA Astrophysics Data System (ADS)

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-01

    Architect, a time-explicit hybrid code designed to perform quick simulations of electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique for investigating the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, while the background plasma is treated as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  1. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario, in the large-code-size limit, is of , where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  2. Numerical simulations of hydrodynamic instabilities: Perturbation codes PANSY, PERLE, and 2D code CHIC applied to a realistic LIL target

    NASA Astrophysics Data System (ADS)

    Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.

    2006-06-01

    This paper deals with simulations of ablation front instabilities in the context of direct-drive ICF. A simplified DT target, representative of a realistic target on the LIL, is considered. We describe two numerical approaches: the linear perturbation method, using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method, using our two-dimensional hydrodynamic code Chic. Numerical solutions are shown to converge, in good agreement with analytical models.

  3. 46 CFR Appendix A to Part 520 - Standard Terminology and Codes

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Standard Terminology and Codes A Appendix A to Part 520... AUTOMATED TARIFFS Pt. 520, App. A Appendix A to Part 520—Standard Terminology and Codes I—Publishing/Amendment Type Codes Code Definition A Increase. C Change resulting in neither increase nor decrease in...

  4. 46 CFR Appendix A to Part 520 - Standard Terminology and Codes

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 9 2014-10-01 2014-10-01 false Standard Terminology and Codes A Appendix A to Part 520... AUTOMATED TARIFFS Pt. 520, App. A Appendix A to Part 520—Standard Terminology and Codes I—Publishing/Amendment Type Codes Code Definition A Increase. C Change resulting in neither increase nor decrease in...

  5. 46 CFR Appendix A to Part 520 - Standard Terminology and Codes

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Standard Terminology and Codes A Appendix A to Part 520... AUTOMATED TARIFFS Pt. 520, App. A Appendix A to Part 520—Standard Terminology and Codes I—Publishing/Amendment Type Codes Code Definition A Increase. C Change resulting in neither increase nor decrease in...

  6. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  7. DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Kargaran, H.; Minuchehr, A.; Zolfaghari, A.

    2016-07-01

    In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in Fortran 90 with parallel-processing capability using the OpenMP framework. DgSMC-B is capable of handling three-dimensional (3D) geometries created from first- and second-order surfaces. It performs independent particle tracking through complex geometry without the intervention of a mesh. In addition, it resolves the computational domain boundary and computes volumes in border grids using a hexahedral mesh. The developed code is robust and self-contained, requiring no separate tools such as mesh generators. The results of six test cases are presented to indicate its ability to deal with a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. DgSMC-B demonstrates its performance and accuracy in a variety of problems, and the results are found to be in good agreement with references and experimental data.
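The combinatorial-geometry idea of building cells from first- and second-order surfaces can be sketched as signed surface tests: each surface is a function whose sign tells which side a point is on, and a cell is the intersection of chosen sides. The sphere/plane helpers and the `region` combinator below are illustrative assumptions, not DgSMC-B's actual geometry kernel.

```python
def sphere(cx, cy, cz, r):
    """Second-order surface: negative inside the sphere."""
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 - r*r

def plane(a, b, c, d):
    """First-order surface a*x + b*y + c*z + d: sign picks a half-space."""
    return lambda p: a*p[0] + b*p[1] + c*p[2] + d

def region(*signed_surfaces):
    """A CG cell: intersection of (surface, sign) pairs, where sign = -1
    selects the negative side of the surface and +1 the positive side."""
    def inside(p):
        return all(sign * f(p) > 0 for f, sign in signed_surfaces)
    return inside

# upper half of a unit sphere: inside the sphere AND above the z = 0 plane
cell = region((sphere(0, 0, 0, 1.0), -1), (plane(0, 0, 1, 0.0), +1))
cell((0, 0, 0.5))  # a particle here is tracked as being inside the cell
```

Mesh-free particle tracking reduces to evaluating these sign tests along each trajectory and finding the nearest surface crossing, which is why no separate mesh generator is needed.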

  8. 50 CFR Table 2a to Part 679 - Species Codes: FMP Groundfish

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Species Codes: FMP Groundfish 2a Table 2a... ALASKA Pt. 679, Table 2a Table 2a to Part 679—Species Codes: FMP Groundfish Species description Code Atka mackerel (greenling) 193 Flatfish, miscellaneous (flatfish species without separate codes) 120...

  9. Code Switching and Code-Mixing as a Communicative Strategy in Multilingual Discourse.

    ERIC Educational Resources Information Center

    Tay, Mary W. J.

    1989-01-01

    Examines how code switching and mixing are used as communication strategies in multilingual communities and discusses how to establish solidarity and rapport in multilingual discourse. Examples from the main languages spoken in Singapore--English, Mandarin, Hokkien, and Teochew--are used. (Author/OD)

  10. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Operation and Maintenance of Nuclear Power Plants; NRC Regulatory Guide (RG) 1.84, Revision 35, “Design..., “Operation and Maintenance Code Case Acceptability, ASME OM Code” (March 2003); and the following ASME Code... Operation and Maintenance of Nuclear Power Plants, ASME Code Case N-722-1, ASME Code Case N-729-1, and...

  11. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q. . Plasma Physics Lab.); Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L. )

    1992-02-01

In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of its capabilities, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  12. Parallelization of the Legendre Transform for a Geodynamics Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.; Heien, E. M.

    2014-12-01

Calypso is a geodynamo code designed to model the magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of Earth. The code has been shown to scale well on computer clusters at the scale of millions of core hours. Depending on the resolution and time requirements, simulations may require weeks to years of wall-clock time for specific target problems. A significant portion of the code execution time is spent transforming computed quantities between physical values and spherical harmonic coefficients, equivalent to a series of linear algebra operations. Intermixing C and Fortran code has opened the door to the CUDA parallel computing platform and its associated libraries. We successfully parallelized the scaling of the Legendre polynomials by both Schmidt normalization coefficients and a set of weighting coefficients; however, the expected speedup was not realized. Specifically, the original version of Calypso 1.1 computes the Legendre transform approximately four seconds faster than the CUDA-enabled modified version. By profiling the code, we determined that the time taken to transfer the data from host memory to GPU memory is not amortized by the number of computations performed within the GPU. Nevertheless, by utilizing techniques such as memory coalescing, cached memory, pinned memory, dynamic parallelism, asynchronous calls, and overlapping memory transfers with computation, the likelihood of a speedup increases. Moreover, ideally the generation of the Legendre polynomial coefficients, Schmidt normalization coefficients, and the set of weights should not only be parallelized but computed on the fly within the GPU. The end result is that we reduce the number of memory transfers from host to GPU, increase the number of parallelized computations on the GPU, and decrease the number of serial computations on the CPU. Also, the time taken to transform physical values to spherical harmonic coefficients is reduced.
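The kernel that was ported to CUDA is, per the abstract, an elementwise scaling of Legendre polynomials by Schmidt normalization and weighting coefficients. A hypothetical Python sketch of that operation (not Calypso's Fortran implementation; function names are illustrative):

```python
def legendre_p(lmax, x):
    """Legendre polynomials P_0..P_lmax at x via the standard three-term
    recurrence (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}."""
    p = [1.0, x]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * x * p[l] - l * p[l - 1]) / (l + 1))
    return p[:lmax + 1]

def scale_by_weights(p, schmidt, weights):
    """Elementwise scaling of the kind offloaded to the GPU in the modified
    Calypso: each polynomial value is multiplied by a normalization
    coefficient and a quadrature weight. Each output element is independent,
    which is why the operation parallelizes trivially."""
    return [pl * s * w for pl, s, w in zip(p, schmidt, weights)]
```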

13. CTCN: Colloid transport code -- nuclear; A user's manual

    SciTech Connect

    Jain, R.

    1993-09-01

    This report describes the CTCN computer code, designed to solve the equations of transient colloidal transport of radionuclides in porous and fractured media. This Fortran 77 package solves systems of coupled nonlinear differential-algebraic equations with a wide range of boundary conditions. The package uses the Method of Lines technique with a special section which forms finite-difference discretizations in up to four spatial dimensions to automatically convert the system into a set of ordinary differential equations. The CTCN code then solves these equations using a robust, efficient ODE solver. Thus CTCN can be used to solve population balance equations along with the usual transport equations to model colloid transport processes or as a general problem solver to treat up to four-dimensional differential-algebraic systems.
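The Method of Lines described above replaces spatial derivatives with finite differences, leaving an ODE system in time. A minimal sketch of the idea for 1D diffusion on a periodic grid (illustrative only; CTCN itself handles up to four dimensions and hands the ODEs to a robust stiff solver rather than the explicit Euler step shown):

```python
def mol_rhs(u, dx, D):
    """Method of Lines: replace the spatial derivative in u_t = D u_xx with a
    central difference on a periodic grid, leaving the ODE system du/dt = f(u)."""
    n = len(u)
    return [D * (u[(i + 1) % n] - 2.0 * u[i] + u[(i - 1) % n]) / dx ** 2
            for i in range(n)]

def euler_step(u, dx, D, dt):
    """One explicit Euler step on the semi-discrete system, for illustration."""
    return [ui + dt * fi for ui, fi in zip(u, mol_rhs(u, dx, D))]
```

Because the discretization is conservative, the total of `u` is preserved step to step, a useful sanity check on any MOL transport solver.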

  14. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
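A workflow manager such as Nexus must run each job only after its dependencies complete. The sketch below is not the Nexus API; it is a hypothetical dependency-ordering routine illustrating the idea:

```python
def resolve_order(jobs):
    """Hypothetical dependency resolution (not the Nexus API): given a mapping
    from job name to the names of jobs it depends on, return an execution
    order in which every job follows its dependencies (assumes a DAG)."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in jobs[name]:
            visit(dep)
        order.append(name)   # appended only after all dependencies

    for name in jobs:
        visit(name)
    return order
```

For example, a relax-then-SCF-then-QMC chain of the kind the abstract mentions (QMCPACK after Quantum Espresso) would come out in dependency order regardless of input ordering.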

  15. Nexus: a modular workflow management system for quantum simulation codes

    SciTech Connect

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  16. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  17. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
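As a toy illustration of the independent per-cell updates that make codes like Cholla map well onto GPUs (this is not Cholla's CUDA code; it uses a simple Rusanov flux for scalar advection in place of the exact Riemann solvers and PPM reconstruction the paper describes):

```python
def rusanov_flux(ul, ur, a):
    """Local Lax-Friedrichs (Rusanov) flux for scalar advection u_t + a u_x = 0;
    a simple stand-in for an approximate Riemann solver."""
    return 0.5 * (a * ul + a * ur) - 0.5 * abs(a) * (ur - ul)

def fv_update(u, a, dt, dx):
    """One conservative finite-volume step on a periodic grid. Each cell
    update depends only on its neighbors' old values, so all cells can be
    updated simultaneously, which is the source of GPU parallelism."""
    n = len(u)
    f = [rusanov_flux(u[i], u[(i + 1) % n], a) for i in range(n)]
    return [u[i] - dt / dx * (f[i] - f[i - 1]) for i in range(n)]
```

At a Courant number of exactly one, this scheme reduces to upwinding and shifts the profile by one cell per step, a standard sanity check.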

  18. CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION

    SciTech Connect

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-15

We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  19. A domain decomposition scheme for Eulerian shock physics codes

    SciTech Connect

    Bell, R.L.; Hertel, E.S. Jr.

    1994-08-01

A new algorithm which allows for complex domain decomposition in Eulerian codes was developed at Sandia National Laboratories. This new feature allows a user to customize the zoning for each portion of a calculation and to refine volumes of the computational space of particular interest. This option is available in one, two, and three dimensions. The new technique will be described in detail, and several examples of its effectiveness will also be discussed.

  20. A chemical reaction network solver for the astrophysics code NIRVANA

    NASA Astrophysics Data System (ADS)

    Ziegler, U.

    2016-02-01

Context. Chemistry often plays an important role in astrophysical gases. It regulates thermal properties by changing species abundances and via ionization processes. In this way, time-dependent cooling mechanisms and other chemistry-related energy sources can have a profound influence on the dynamical evolution of an astrophysical system. Modeling those effects with the underlying chemical kinetics in realistic magneto-gasdynamical simulations provides the basis for a better link to observations. Aims: The present work describes the implementation of a chemical reaction network solver into the magneto-gasdynamical code NIRVANA. For this purpose a multispecies structure is installed, and a new module for evolving the rate equations of chemical kinetics is developed and coupled to the dynamical part of the code. A small chemical network for a hydrogen-helium plasma was constructed, including associated thermal processes, which is used in test problems. Methods: Evolving a chemical network within time-dependent simulations requires the additional solution of a set of coupled advection-reaction equations for species and gas temperature. Second-order Strang splitting is used to separate the advection part from the reaction part. The ordinary differential equation (ODE) system representing the reaction part is solved with a fourth-order generalized Runge-Kutta method applicable to the stiff systems inherent to astrochemistry. Results: A series of tests was performed in order to check the correctness of the numerical and technical implementation. Tests include well-known stiff ODE problems from the mathematical literature, to confirm the accuracy properties of the solver used, as well as problems combining gasdynamics and chemistry. Overall, very satisfactory results are achieved. Conclusions: The NIRVANA code is now ready to handle astrochemical processes in time-dependent simulations. An easy-to-use interface allows implementation of complex networks including thermal processes.
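The second-order Strang splitting described in the Methods section advances the reaction part for half a step, the advection part for a full step, then the reaction part for another half step. A minimal sketch with illustrative sub-solvers (the actual NIRVANA sub-steps are far more involved):

```python
import math

def strang_step(u, dt, advect, react):
    """Strang splitting: half reaction step, full advection step, half
    reaction step, giving second-order accuracy in dt when both sub-solvers
    are at least second-order (as in the advection-reaction split above)."""
    u = react(u, dt / 2.0)
    u = advect(u, dt)
    return react(u, dt / 2.0)

# Illustrative sub-solvers: exact exponential decay and trivial advection.
decay = lambda u, dt: u * math.exp(-dt)
freeze = lambda u, dt: u
```

With an exact reaction solver and trivial advection, the two half-steps compose to the exact solution, a quick correctness check on the splitting logic.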

  1. On a stochastic approach to a code performance estimation

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

The main goal of efficient profiling of software is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during execution affect the performance of the system itself. One important aspect of the overhead arises from random variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. To overcome these difficulties, we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler using 3D surface plots produced by the function surf. The idea can be used for estimating program efficiency.
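Finite normal mixtures of the kind used in the paper are commonly fitted by expectation-maximization; the following toy EM fit of a two-component mixture to timing differences is an illustrative stand-in, not the authors' moving-separation-of-mixtures procedure:

```python
import math, random

def em_two_normals(xs, iters=200):
    """Minimal EM fit of a two-component normal mixture, a toy stand-in for
    finite-normal-mixture modelling of profiler timing differences."""
    mu = [min(xs), max(xs)]          # spread the initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []                    # E-step: posterior responsibilities
        for x in xs:
            p = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([pk / s for pk in p])
        for k in range(2):           # M-step: weighted moment updates
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-9
    return pi, mu, var
```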

  2. ACDOS3: a further improved neutron dose-rate code

    SciTech Connect

    Martin, C.S.

    1982-07-01

ACDOS3 is a computer code designed primarily to calculate the activities and dose rates produced by neutron activation in a variety of simple geometries. Neutron fluxes, in up to 50 groups and with energies up to 20 MeV, must be supplied as part of the input data. The neutron-source strength must also be supplied; alternatively, the code will compute it from neutral-beam operating parameters in the case where the source is a fusion-reactor injector. ACDOS3 differs from the previous version ACDOS2 in that additional geometries have been added, the neutron cross-section library has been updated, an estimate of the energy deposited by neutron reactions has been provided, and a significant increase in efficiency in reading the data libraries has been incorporated.

  3. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suitable for software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.
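A simplified version of the kind of operation counting the paper performs: on the conventional trellis module of a rate-k/n code with memory ν, hard-decision VA does one n-bit branch-metric addition per branch and a (2^k - 1)-way compare-select per state. A sketch of that count (a simplified measure in the spirit of the paper, not its exact one, which also covers minimal trellis modules):

```python
def viterbi_ops_per_section(nu, k, n):
    """Simplified arithmetic-operation count for hard-decision Viterbi
    decoding over one section of the conventional trellis module of a
    rate-k/n convolutional code with memory nu: one n-bit branch-metric
    addition per branch plus a (2**k - 1)-comparison select per state."""
    states = 2 ** nu
    branches = states * 2 ** k
    additions = branches * n          # Hamming branch metrics accumulated
    comparisons = states * (2 ** k - 1)
    return additions + comparisons
```

For the widely used rate-1/2, constraint-length-7 code (ν = 6), this simplified count gives 320 operations per trellis section.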

  4. RHALE: A 3-D MMALE code for unstructured grids

    SciTech Connect

    Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.

    1993-08-01

This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two- and three-dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.

  5. A more accurate nonequilibrium air radiation code - NEQAIR second generation

    NASA Technical Reports Server (NTRS)

    Moreau, Stephane; Laux, Christophe O.; Chapman, Dean R.; Maccormack, Robert W.

    1992-01-01

    Two experiments, one an equilibrium flow in a plasma torch at Stanford, the other a nonequilibrium flow in a SDIO/IST Bow-Shock-Ultra-Violet missile flight, have provided the basis for modifying, enhancing, and testing the well-known radiation code, NEQAIR. The original code, herein termed NEQAIR1, lacked computational efficiency, accurate data for some species and the flexibility to handle a variety of species. The modified code, herein termed NEQAIR2, incorporates recent findings in the spectroscopic and radiation models. It can handle any number of species and radiative bands in a gas whose thermodynamic state can be described by up to four temperatures. It provides a new capability of computing very fine spectra in a reasonable CPU time, while including transport phenomena along the line of sight and the characteristics of instruments that were used in the measurements. Such a new tool should allow more accurate testing and diagnosis of the different physical models used in numerical simulations of radiating, low density, high energy flows.

  6. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    SciTech Connect

    Johnson, Jeffrey

    2006-04-01

Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.

  7. A two-phase code for protoplanetary disks

    NASA Astrophysics Data System (ADS)

    Inaba, S.; Barge, P.; Daniel, E.; Guillard, H.

    2005-02-01

A high-accuracy 2D hydrodynamical code has been developed to simulate the flow of gas and solid particles in protoplanetary disks. Gas is considered a compressible fluid, while solid particles, fully coupled to the gas by aerodynamical forces, are treated as a pressure-free diluted second phase. The solid particles lose energy and angular momentum, which are transferred to the gas. As a result, particles migrate inward toward the star and gas moves outward. High accuracy is necessary to account for the coupling. Boundary conditions must account for the inward/outward motions of the two phases. The code has been tested in one- and two-dimensional situations. The numerical results were compared with analytical solutions in three different cases: i) the disk is composed of a single gas component; ii) solid particles migrate in a steady flow of gas; iii) gas and solid particles evolve simultaneously. The code easily reproduces known analytical solutions and is a powerful tool to study planetary formation at the decoupling stage. For example, the evolution of an over-density in the radial distribution of solids is found to differ significantly from the case where no back reaction of the particles onto the gas is assumed. Inside the bump, solid particles have a drift velocity approximately 16 times smaller than outside, which significantly increases the residence time of the particles in the nebula. This opens some interesting perspectives for solving the timescale problem for the formation of planetesimals.
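The aerodynamic coupling described above exchanges momentum between the gas and dust phases while conserving the total. A hypothetical sketch of one implicit drag-exchange step for a single velocity component (illustrative only, not the code's scheme; `eps` and `tstop` are assumed names for the dust-to-gas mass ratio and stopping time):

```python
import math

def drag_exchange(v_gas, v_dust, eps, dt, tstop):
    """One implicit drag step between gas and pressure-free dust. Mutual drag
    relaxes the velocity difference exponentially on the stopping time while
    leaving the center-of-mass velocity, hence total momentum, unchanged."""
    vcm = (v_gas + eps * v_dust) / (1.0 + eps)
    dv = (v_dust - v_gas) * math.exp(-dt * (1.0 + eps) / tstop)
    return vcm - eps * dv / (1.0 + eps), vcm + dv / (1.0 + eps)
```

Conserving total momentum under strong drag is exactly the property that makes such schemes stable when particles and gas are tightly coupled.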

  8. SPECE: a code for Electron Cyclotron Emission in tokamaks

    SciTech Connect

    Farina, D.; Figini, L.; Platania, P.; Sozzi, C.

    2008-03-12

The code SPECE has been developed for the analysis of electron cyclotron emission (ECE) in a general tokamak equilibrium. The code solves the radiation transport equation along ray trajectories in a tokamak plasma, in which the magnetic equilibrium and plasma profiles are given either analytically or numerically, for a Maxwellian plasma or a non-thermal plasma characterized by a distribution function that is a sum of drifting Maxwellian distributions. Ray trajectories are computed making use of the cold dispersion relation, while the absorption and emission coefficients are obtained by solving the relevant fully relativistic dispersion relation valid at high electron temperature. The actual antenna pattern is simulated by means of a multi-ray calculation, and the spatial resolution of the ECE measurements is computed by means of an algorithm that properly takes into account the emission along each ray of the beam. Wall effects are introduced in the code by means of a heuristic model. Results of ECE simulations in a standard ITER scenario are presented.

  9. Modulation and coding used by a major satellite communications company

    NASA Technical Reports Server (NTRS)

    Renshaw, K. H.

    1992-01-01

    Hughes Communications Inc., is a major satellite communications company providing or planning to provide the full spectrum of services available on satellites. All of the current services use conventional modulation and coding techniques that were well known a decade or longer ago. However, the future mobile satellite service will use significantly more advanced techniques. JPL, under NASA sponsorship, has pioneered many of the techniques that will be used.

  10. A signature of neural coding at human perceptual limits

    PubMed Central

    Bays, Paul M.

    2016-01-01

    Simple visual features, such as orientation, are thought to be represented in the spiking of visual neurons using population codes. I show that optimal decoding of such activity predicts characteristic deviations from the normal distribution of errors at low gains. Examining human perception of orientation stimuli, I show that these predicted deviations are present at near-threshold levels of contrast. The findings may provide a neural-level explanation for the appearance of a threshold in perceptual awareness whereby stimuli are categorized as seen or unseen. As well as varying in error magnitude, perceptual judgments differ in certainty about what was observed. I demonstrate that variations in the total spiking activity of a neural population can account for the empirical relationship between subjective confidence and precision. These results establish population coding and decoding as the neural basis of perception and perceptual confidence. PMID:27604067
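Population decoding of a circular feature such as orientation can be illustrated with the classic population-vector readout (a simpler decoder than the optimal Bayesian one analyzed in the paper):

```python
import math

def population_vector(angles, rates):
    """Population-vector readout: sum each neuron's preferred-angle unit
    vector weighted by its firing rate, then take the resultant angle.
    With evenly spaced preferences and symmetric tuning this estimate is
    nearly unbiased."""
    x = sum(r * math.cos(a) for a, r in zip(angles, rates))
    y = sum(r * math.sin(a) for a, r in zip(angles, rates))
    return math.atan2(y, x)
```

Driving a small population with bell-shaped (von Mises-like) tuning curves peaked at a stimulus angle recovers that angle to high accuracy.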

  11. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  12. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
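The purine/pyrimidine reduction the article describes can be written directly:

```python
def purine_pyrimidine(seq):
    """Collapse a DNA sequence to the binary purine/pyrimidine alphabet the
    article discusses: A and G (double-ring purines) -> 'R'; C and T
    (single-ring pyrimidines) -> 'Y'."""
    return ''.join('R' if base in 'AG' else 'Y' for base in seq.upper())
```

Each codon thus maps to one of eight R/Y patterns, which is the binary layer of information the article highlights.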

  13. The genetic code as a periodic table: algebraic aspects.

    PubMed

    Bashford, J D; Jarvis, P D

    2000-01-01

    The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995. BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data such as codon periodicities, and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.
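The paper's numerical labelling of bases can be reproduced directly; each codon becomes a point with six coordinates in the codon weight space:

```python
def codon_coords(codon):
    """Map a codon to weight-space coordinates using the base labelling from
    the paper: A=(-1,0), C=(0,-1), G=(0,1), U=(1,0). Concatenating the three
    base labels gives the codon's six coordinates."""
    label = {'A': (-1, 0), 'C': (0, -1), 'G': (0, 1), 'U': (1, 0)}
    coords = []
    for base in codon:
        coords.extend(label[base])
    return tuple(coords)
```

Physico-chemical indices are then fitted as low-order polynomials in these six coordinates, as the abstract describes.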

  14. nMHDust: A 4-Fluid Partially Ionized Dusty Plasma Code

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel

    2008-11-01

nMHDust is a next-generation 4-fluid partially ionized magnetized dusty plasma code, treating the inertial dynamics of dust, ion, and neutral components. Written in ANSI C, the code builds its numerical method on the MHDust 3-fluid fully ionized dusty plasma code, expanding its features to include ionization/recombination effects and the netCDF data format. Tests of this code include: ionization instabilities, wave-mode propagation (electromagnetic and acoustic), shear-flow instabilities, and magnetic reconnection. Relevant parameters for the space environment are considered, allowing a comparison to be made with previous dusty plasma codes (MHDust and DENISIS). The utility of the code is expanded by allowing a vanishingly small dust mass, so that nMHDust can also be used as a 2-ion plasma code. nMHDust completes the array of fluid dusty plasma codes available for numerical investigations of nonlinear phenomena in astrophysical dusty plasmas.

  15. BLSTA: A boundary layer code for stability analysis

    NASA Technical Reports Server (NTRS)

    Wie, Yong-Sun

    1992-01-01

A computer program is developed to solve the compressible, laminar boundary-layer equations for two-dimensional flow, axisymmetric flow, and quasi-three-dimensional flows, including the flow along the plane of symmetry, flow along the leading-edge attachment line, and swept-wing flows with a conical flow approximation. The finite-difference numerical procedure used to solve the governing equations is second-order accurate. Flows over a wide range of speeds, from subsonic to hypersonic, can be calculated under a perfect-gas assumption. Various wall boundary conditions, such as wall suction or blowing and hot or cold walls, can be applied. The results indicate that this boundary-layer code gives velocity and temperature profiles which are accurate, smooth, and continuous through the first and second normal derivatives. The code presented herein can be coupled with a stability analysis code and used to predict the onset of boundary-layer transition, which enables the assessment of laminar flow control techniques. A user's manual is also included.

  16. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  17. Direct Calculations of Current Drive with a Full Wave Code

    NASA Astrophysics Data System (ADS)

    Wright, John C.; Phillips, Cynthia K.

    1997-11-01

    We have developed a current drive package that evaluates the current driven by fast magnetosonic waves in arbitrary flux geometry. An expression for the quasilinear flux has been derived which accounts for coupling between modes in the spectrum of waves launched from the antenna. The field amplitudes are calculated in the full wave code, FISIC, and the current response function, \\chi, also known as the Spitzer function, is determined with Charles Karney's Fokker-Planck code, adj.f. Both codes have been modified to incorporate the same numerical equilibria. To model the effects of a trapped particle population, the bounce averaged equations for current and power are used, and the bounce averaged flux is calculated. The computer model is benchmarked against the homogeneous equations for a high aspect ratio case in which the expected agreement is confirmed. Results from cases for TFTR, NSTX and CDX-U are contrasted with the predictions of the Ehst-Karney parameterization of current drive for circular equilibria. For theoretical background, please see the authors' archive of papers at http://w3.pppl.gov/~jwright/Publications.

  18. A new neutron energy spectrum unfolding code using a two steps genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Hosseini, S. A.; Sohrabpour, M.

    2016-03-01

    A new neutron spectrum unfolding code TGASU (Two-steps Genetic Algorithm Spectrum Unfolding) has been developed to unfold the neutron spectrum from a pulse height distribution which was calculated using the MCNPX-ESUT computational Monte Carlo code. To perform the unfolding process, the response matrices were generated using the MCNPX-ESUT computational code. Both one-step (common GA) and two-step GAs have been implemented to unfold the neutron spectra. According to the obtained results, the new two-step GA code shows a closer match in all energy regions, and particularly in the high-energy regions. The results of the TGASU code have been compared with those of the standard spectra, the LSQR method, and the GAMCD code. The results of the TGASU code have been demonstrated to be more accurate than those of the existing computational codes for both under-determined and over-determined problems.
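The two-step idea can be sketched generically: run a GA with a broad mutation width first, then refine the survivors with a narrow one. The toy problem below (a random response matrix, a noise-free measurement, and generic GA operators) is a hedged illustration of that pattern, not TGASU's actual algorithm or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: a response matrix R maps a discretized spectrum
# phi to a pulse-height distribution m; unfolding recovers phi from m.
n_channels, n_bins = 12, 6
R = rng.uniform(0.1, 1.0, size=(n_channels, n_bins))
true_phi = rng.uniform(0.0, 1.0, size=n_bins)
m = R @ true_phi                                # noise-free "measurement"

def fitness(pop):
    # Larger is better: negative residual norm of R @ phi - m.
    return -np.linalg.norm(pop @ R.T - m, axis=1)

def ga_step(pop, n_gen, sigma):
    """One GA stage: tournament selection, uniform crossover, Gaussian
    mutation of width sigma, and elitism (the best individual survives)."""
    for _ in range(n_gen):
        f = fitness(pop)
        best = pop[np.argmax(f)].copy()
        i, j = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        children += sigma * rng.normal(size=children.shape)
        pop = np.clip(children, 0.0, None)      # spectra are non-negative
        pop[0] = best                           # elitism
    return pop

pop = rng.uniform(0.0, 1.0, (200, n_bins))
pop = ga_step(pop, 200, sigma=0.10)             # step 1: broad search
pop = ga_step(pop, 200, sigma=0.01)             # step 2: local refinement
phi_hat = pop[np.argmax(fitness(pop))]
```

The second, narrow-mutation stage is what distinguishes the two-step scheme from a single GA run with fixed operators.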

  19. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  20. A Radiation Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The Full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g . The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.
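The table-lookup-plus-quadrature pattern described above (coefficients interpolated to local conditions for each g, then Gaussian quadrature over g) can be sketched schematically. The coefficients, table, and column-emissivity formula below are fabricated for illustration; they are not NCC's tables or its RTE solve:

```python
import numpy as np

# Hypothetical tabulation: absorption coefficient k(T, g) stored at discrete
# temperatures for each quadrature point g. A real full-spectrum
# k-distribution table also spans participating-species mole fractions.
T_table = np.array([300.0, 800.0, 1300.0, 1800.0])          # K
g_nodes, g_weights = np.polynomial.legendre.leggauss(8)
g_nodes = 0.5 * (g_nodes + 1.0)                             # map [-1,1] -> [0,1]
g_weights = 0.5 * g_weights                                 # weights now sum to 1

# Fabricated, monotone-in-g coefficients purely for illustration.
k_table = np.array([[0.1 + 2.0 * g * (T / 1000.0) for g in g_nodes]
                    for T in T_table])

def k_local(T):
    """Interpolate the tabulated coefficients to the local cell temperature."""
    return np.array([np.interp(T, T_table, k_table[:, j])
                     for j in range(len(g_nodes))])

def emitted_flux(T, L):
    """Crude emitted-flux estimate for a homogeneous column of length L:
    integrate an emissivity-like factor (1 - exp(-k L)) over g by quadrature."""
    k = k_local(T)
    sigma = 5.670374419e-8                                  # Stefan-Boltzmann
    emissivity = np.sum(g_weights * (1.0 - np.exp(-k * L)))
    return emissivity * sigma * T**4

flux = emitted_flux(1100.0, 0.5)
```

The interpolation step stands in for the "table and interpolation" of the abstract; the quadrature loop stands in for solving the transformed RTE at each g and summing.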

  1. A Fast Code for Jupiter Atmospheric Entry Analysis

    NASA Technical Reports Server (NTRS)

    Yauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq

    1999-01-01

    A fast code was developed to calculate the forebody heating environment and heat shielding that is required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry. The calculation required 3.5 sec of CPU time on a workstation, or three to four orders of magnitude less than previous Jovian entry heat shield calculations. The computed surface recessions from ablation were compared with the flight values at six body stations. On average, the absolute difference between the predicted and measured recession was 13.7%, with the predictions running high. The forebody's mass loss was overpredicted by 5.3% and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was satisfactory in view of the code's fast running time and the methods' approximations.

  2. A novel theory on the origin of the genetic code: a GNC-SNS hypothesis.

    PubMed

    Ikehara, Kenji; Omori, Yoko; Arai, Rieko; Hirose, Akiko

    2002-04-01

    We have previously proposed an SNS hypothesis on the origin of the genetic code (Ikehara and Yoshida 1998). The hypothesis predicts that the universal genetic code originated from the SNS code composed of 16 codons and 10 amino acids (S and N stand for G or C and any of the four bases, respectively). But it must have been very difficult to create the SNS code at one stroke in the beginning. Therefore, we searched for a simpler code than the SNS code, one that could still encode water-soluble globular proteins with appropriate three-dimensional structures at a high probability, using four conditions for globular protein formation (hydropathy, alpha-helix, beta-sheet, and beta-turn formations). Four amino acids (Gly [G], Ala [A], Asp [D], and Val [V]) encoded by the GNC code satisfied the four structural conditions well, whereas the other codes in the rows and columns of the universal genetic code table did not, except for the GNG code, a slightly modified form of the GNC code. Three three-amino acid systems ([D], Leu and Tyr; [D], Tyr and Met; Glu, Pro and Ile) also satisfied the above four conditions. But some amino acids in the three systems are far more complex than those encoded by the GNC code. In addition, the amino acids in the three-amino acid systems are scattered in the universal genetic code table. Thus, we concluded that the universal genetic code originated not from a three-amino acid system but from a four-amino acid system, the GNC code encoding [GADV]-proteins, as the most primitive genetic code.
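The GNC code is small enough to write out in full. The mapping below uses the standard genetic code (GGC, GCC, GAC, GTC read as DNA codons) and reproduces the [GADV] amino-acid set named in the abstract; the helper function is purely illustrative:

```python
# The four GNC codons (G, any base, C) and the amino acids they encode in
# the universal genetic code -- the [GADV] set of the hypothesis.
standard_table = {"GGC": "Gly", "GCC": "Ala", "GAC": "Asp", "GTC": "Val"}

def translate_gnc(dna):
    """Translate a DNA string made of GNC codons into [GADV] residues."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return [standard_table[c] for c in codons]

print(translate_gnc("GGCGCCGACGTC"))   # ['Gly', 'Ala', 'Asp', 'Val']
```

Sixteen SNS codons reduce to these four once the middle base is fixed against the G/C constraint of the first and third positions, which is the simplification the authors argue for.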

  3. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but is not tailored to researchers. In comparison, OSF offers a one-stop solution for researchers, but much of its functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  4. Evaluation of coded aperture radiation detectors using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded aperture enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can, however, be reclaimed to a substantial extent by using our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions regarding masking and its impact on measured photon distributions.
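Fusing per-observation scores across multiple observations can be illustrated with two simple rules, summing log-likelihood ratios versus keeping the single strongest score. This is a hedged sketch in the spirit of the abstract, not its actual algorithms; the Poisson rates b and s are hypothetical:

```python
import math

# Counts modeled as Poisson: background rate b, source-plus-background b + s.
b, s = 10.0, 4.0

def log_lr(count):
    """Log-likelihood ratio of 'source present' vs 'background only' for one
    observed count (Poisson model; the factorial terms cancel)."""
    return count * math.log((b + s) / b) - s

def fuse_sum(counts):
    """Fusion rule 1: sum per-observation log-likelihood ratios."""
    return sum(log_lr(c) for c in counts)

def fuse_max(counts):
    """Fusion rule 2: score a location by its single strongest observation."""
    return max(log_lr(c) for c in counts)

background_run = [9, 11, 10, 8]    # counts consistent with background
source_run = [14, 15, 13, 16]      # counts elevated by a source
```

Summation accumulates weak but consistent evidence; the max rule is more robust when only a few observations actually view the source through the open mask elements.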

  5. DANTSYS: A diffusion accelerated neutral particle transport code system

    SciTech Connect

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing.

  6. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Since the thermal radiation of coded masks is considerably more severe than in visible imaging systems, burying the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of the mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: coded mask imaging with ideal focused lenses, and imperfect imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF across the field of view. The influence of mask cell size on the diffraction pattern is also analyzed. Numerical results show that mask patterns for direct imaging systems should have more random structure, while more periodic structure is needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, the desired diffraction pattern can be achieved.
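The central modeling step, representing the system PSF as a convolution of the mask's diffraction pattern with the lens PSF, is easy to sketch numerically. Both inputs below are synthetic stand-ins (a pseudo-random binary array for the mask pattern, a Gaussian for the lens), not the paper's measured or simulated data:

```python
import numpy as np

def conv2d_same(a, b):
    """2-D linear convolution via FFT, cropped to a's shape ('same')."""
    s0, s1 = a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1
    out = np.fft.irfft2(np.fft.rfft2(a, (s0, s1)) * np.fft.rfft2(b, (s0, s1)),
                        (s0, s1))
    r0, r1 = (b.shape[0] - 1) // 2, (b.shape[1] - 1) // 2
    return out[r0:r0 + a.shape[0], r1:r1 + a.shape[1]]

# Hypothetical mask diffraction pattern: a small pseudo-random binary array.
rng = np.random.default_rng(1)
mask_pattern = rng.integers(0, 2, (31, 31)).astype(float)

# Normalized Gaussian blur standing in for the practical-lens PSF.
y, x = np.mgrid[-7:8, -7:8]
lens_psf = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))
lens_psf /= lens_psf.sum()

# System PSF = (mask diffraction pattern) * (lens PSF), per the model.
system_psf = conv2d_same(mask_pattern, lens_psf)
```

With the lens kernel normalized to unit sum, the convolution only smears the mask pattern; the mask's modulation structure, which the thermal radiation obscures in practice, stays visible in the model.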

  7. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel. [CONDOR code

    SciTech Connect

    Chen, K.F.; Olson, C.A.

    1983-09-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction.

  8. A Cooperative Downloading Method for VANET Using Distributed Fountain Code

    PubMed Central

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-01-01

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, the poor channel quality and short encounter time bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding, as well as the single feedback mechanism of DFC, cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of the amount of downloaded data and the data receiving rate. PMID:27754339
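The core fountain-code property used here, that the client can recover the raw data from any sufficiently large set of encoded packets with no retransmission, can be sketched with a bare-bones random-XOR code and a peeling decoder. This is a generic illustration; the paper's hierarchical/concatenated construction, degree distribution, and feedback mechanism are all omitted:

```python
import random

def encode(blocks, n_packets, seed=42):
    """Fountain-style encoding: each packet is the XOR of a random subset of
    source blocks, tagged with the subset's indices (a simplified LT code
    with a naive uniform degree choice)."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        d = rng.randint(1, k)
        idx = frozenset(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def decode(packets, k):
    """Peeling decoder: repeatedly resolve degree-1 packets and substitute
    recovered blocks back into the remaining packets."""
    pk = [[set(idx), val] for idx, val in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in pk:
            idx, val = p
            for i in list(idx):            # strip already-recovered blocks
                if i in known:
                    idx.discard(i)
                    val ^= known[i]
            p[1] = val
            if len(idx) == 1:              # degree-1 packet pins one block
                known[idx.pop()] = val
                progress = True
    return [known.get(i) for i in range(k)]
```

Lost packets simply never reach `decode`; as long as the surviving set still peels completely, the client recovers every block, which is why no per-packet retransmission is needed.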

  9. A temporal channel for information in sparse sensory coding

    PubMed Central

    Gupta, Nitin; Stopfer, Mark

    2014-01-01

    Background: Sparse codes are found in nearly every sensory system, but the role of spike timing in sparse sensory coding is unclear. Here we used the olfactory system of awake locusts to test whether the timing of spikes in Kenyon cells, a population of neurons that responds sparsely to odors, carries sensory information to, and influences the responses of, follower neurons. Results: We characterized two major classes of direct followers of Kenyon cells. With paired intracellular and field potential recordings made during odor presentations, we found these followers contain information about odor identity in the temporal patterns of their spikes, rather than in the spike rate, the spike phase or the identities of the responsive neurons. Subtly manipulating the relative timing of Kenyon cell spikes with temporally and spatially structured microstimulation reliably altered the response patterns of the followers. Conclusions: Our results show that even remarkably sparse spiking responses can provide information through stimulus-specific variations in timing on the order of tens to hundreds of milliseconds, and that these variations can determine the responses of downstream neurons. These results establish the importance of spike timing in a sparse sensory code. PMID:25264257

  10. A Cooperative Downloading Method for VANET Using Distributed Fountain Code.

    PubMed

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-10-12

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, the poor channel quality and short encounter time bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding, as well as the single feedback mechanism of DFC, cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of the amount of downloaded data and the data receiving rate.

  11. SULEC: Benchmarking a new ALE finite-element code

    NASA Astrophysics Data System (ADS)

    Buiter, S.; Ellis, S.

    2012-04-01

    We have developed a 2-D/3-D arbitrary Lagrangian-Eulerian (ALE) finite-element code, SULEC, based on known techniques from the literature. SULEC is successful in tackling many of the problems faced by numerical models of lithosphere and mantle processes, such as the combination of viscous, elastic, and plastic rheologies, the presence of a free surface, the contrast in viscosity between the lithosphere and the underlying asthenosphere, and the occurrence of large deformations including viscous flow and offset on shear zones. The aim of our presentation is (1) to describe SULEC, and (2) to present a set of analytical and numerical benchmarks that we use to continuously test our code. SULEC solves the incompressible momentum equation coupled with the energy equation. It uses a structured mesh built of quadrilateral or brick elements that can vary in size in all dimensions, allowing high resolution to be achieved where required. The elements are either linear in velocity with constant pressure, or quadratic in velocity with linear pressure. An accurate pressure field is obtained through an iterative penalty (Uzawa) formulation. Material properties are carried on tracer particles that are advected through the Eulerian mesh. Shear elasticity is implemented following the approach of Moresi et al. [J. Comp. Phys. 184, 2003], brittle materials deform following a Drucker-Prager criterion, and viscous flow is by temperature- and pressure-dependent power-law creep. The top boundary of our models is a true free surface (with free surface stabilisation) on which simple surface process models may be imposed. We use a set of benchmarks that test viscous, viscoelastic, elastic and plastic deformation, temperature advection and conduction, free surface behaviour, and pressure computation. Part of our benchmark set is automated, allowing easy testing of new code versions. Examples include Poiseuille flow, Couette flow, Stokes flow, relaxation of viscous topography, and viscous pure shear.
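Benchmarks of this kind compare a discrete solution against a closed form. As a hedged, generic illustration (not SULEC code), a plane Poiseuille channel solved with second-order central differences reproduces the analytic parabolic profile to round-off, because the exact solution is quadratic and therefore differenced exactly:

```python
import numpy as np

# Plane Poiseuille flow: mu * u''(y) = dp/dx on y in [0, h], no-slip walls.
# All parameter values are illustrative.
mu, dpdx, h, n = 1.0, -2.0, 1.0, 101
y = np.linspace(0.0, h, n)
dy = y[1] - y[0]

# Assemble the second-order central-difference system A u = rhs.
A = np.zeros((n, n))
rhs = np.full(n, dpdx / mu)
A[0, 0] = A[-1, -1] = 1.0
rhs[0] = rhs[-1] = 0.0                          # no-slip boundaries
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / dy**2
    A[i, i] = -2.0 / dy**2

u_num = np.linalg.solve(A, rhs)
u_exact = (dpdx / (2.0 * mu)) * (y**2 - h * y)  # analytic parabola

max_err = np.abs(u_num - u_exact).max()         # round-off level
```

Automated benchmarks then reduce to a single assertion on `max_err`, which is why they are cheap to run on every new code version.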

  12. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    PubMed

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.

  13. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  14. A robust CELP coder with source-dependent channel coding

    NASA Technical Reports Server (NTRS)

    Sukkar, Rafid A.; Kleijn, W. Bastiaan

    1990-01-01

    A CELP coder using Source Dependent Channel Encoding (SDCE) for optimal channel error protection is introduced. With SDCE, each of the CELP parameters are encoded by minimizing a perceptually meaningful error criterion under prevalent channel conditions. Unlike conventional channel coding schemes, SDCE allows for optimal balance between error detection and correction. The experimental results show that the CELP system is robust under various channel bit error rates and displays a graceful degradation in SSNR as the channel error rate increases. This is a desirable property to have in a coder since the exact channel conditions cannot usually be specified a priori.

  15. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge frequency and phase offset estimation ranges and enhance accuracy of the system greatly, and the bit error rate (BER) performance of the system is improved effectively compared with that of the system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  16. Development of a fan model for the CONTAIN code

    SciTech Connect

    Pevey, R.E.

    1987-01-08

    A fan model has been added to the CONTAIN code with a minimum of disruption of the standard CONTAIN calculation sequence. The user is required to supply a simple pressure vs. flow rate curve for each fan in his model configuration. Inclusion of the fan model required modification to two CONTAIN subroutines, IFLOW and EXEQNX. The two modified routines and the resulting executable module are located on the LANL mass storage system as /560007/iflow, /560007/exeqnx, and /560007/cont01, respectively. The model has been initially validated using a very simple sample problem and is ready for a more complete workout using the SRP reactor models from the RSRD probabilistic risk analysis.
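A user-supplied pressure-vs-flow curve of the kind the model requires can be represented as a simple interpolation table; the fan's operating point is then the intersection of that curve with the system's resistance curve. The curve values, the quadratic system model, and the bisection helper below are all illustrative assumptions, not CONTAIN's implementation:

```python
import numpy as np

# Hypothetical user-supplied fan curve: head pressure (Pa) vs flow (m^3/s).
q_pts = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
p_pts = np.array([500.0, 450.0, 350.0, 200.0, 0.0])

def fan_pressure(q):
    """Piecewise-linear interpolation of the tabulated fan curve."""
    return np.interp(q, q_pts, p_pts)

def operating_point(k, lo=0.0, hi=2.0, tol=1e-9):
    """Flow at which the fan curve meets a quadratic system curve dP = k q^2,
    found by bisection on f(q) = fan_pressure(q) - k q^2 (f is monotone)."""
    f = lambda q: fan_pressure(q) - k * q * q
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

q_op = operating_point(k=200.0)   # flow where fan head balances resistance
```

Because the fan curve falls while the resistance curve rises, the balance point is unique, which keeps the flow-network solve well behaved.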

  17. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the use of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models, up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamics codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Until now, the only reliable method of obtaining such data has been the wind tunnel, with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  18. A LONE code for the sparse control of quantum systems

    NASA Astrophysics Data System (ADS)

    Ciaramella, G.; Borzì, A.

    2016-03-01

    In many applications with quantum spin systems, control functions with a sparse and pulse-shaped structure are often required. These controls can be obtained by solving quantum optimal control problems with L1-penalized cost functionals. In this paper, the MATLAB package LONE is presented, aimed at solving L1-penalized optimal control problems governed by unitary-operator quantum spin models. This package implements a new strategy that includes a globalized semi-smooth Krylov-Newton scheme and a continuation procedure. Results of numerical experiments demonstrate the ability of the LONE code to compute accurate sparse optimal control solutions.

  19. A novel embedding technique for dirty paper trellis codes watermarking

    NASA Astrophysics Data System (ADS)

    Chaumont, Marc

    2010-01-01

    Dirty Paper Trellis Codes (DPTC) watermarking, published in 2004, is a very efficient high rate scheme. Nevertheless, it has two strong drawbacks: weak security and high CPU computation cost. We propose an embedding space that is at least as secure, and a faster embedding. The embedding space is built on the projections of some wavelet coefficients onto secret carriers. It keeps a good security level and also has good psycho-visual properties. The embedding is based on a dichotomous rotation in the Cox, Miller and Boom Plane. It gives better performance than previous fast embedding approaches. Four different attacks were performed and revealed good robustness and speed.

  20. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

    [Partially recovered notation excerpt from the report: diameter of focal blur for a clear aperture; Laplacian of Gaussian of the image; Fourier transforms of the image in Cartesian and polar coordinates; focal length of the lens; image in polar coordinates.] ...captures a Fourier transform of each image at various angles rather than low-resolution images [38]. Multiple coded images have also been used, with

  1. Nyx: A MASSIVELY PARALLEL AMR CODE FOR COMPUTATIONAL COSMOLOGY

    SciTech Connect

    Almgren, Ann S.; Bell, John B.; Lijewski, Mike J.; Lukic, Zarija; Van Andel, Ethan

    2013-03-01

    We present a new N-body and gas dynamics code, called Nyx, for large-scale cosmological simulations. Nyx follows the temporal evolution of a system of discrete dark matter particles gravitationally coupled to an inviscid ideal fluid in an expanding universe. The gas is advanced in an Eulerian framework with block-structured adaptive mesh refinement; a particle-mesh scheme using the same grid hierarchy is used to solve for self-gravity and advance the particles. Computational results demonstrating the validation of Nyx on standard cosmological test problems, and the scaling behavior of Nyx to 50,000 cores, are presented.
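The particle-mesh step mentioned above deposits particle mass onto the grid before the gravity solve. A minimal 1-D cloud-in-cell deposit (periodic, cell-centered, purely illustrative; Nyx works in 3-D on an AMR hierarchy) looks like:

```python
import numpy as np

def cic_deposit(positions, masses, n_cells, box):
    """1-D cloud-in-cell deposition: each particle's mass is shared linearly
    between its two nearest cell centers -- the kernel used by
    particle-mesh gravity solvers. Periodic boundaries, density units m/dx."""
    rho = np.zeros(n_cells)
    dx = box / n_cells
    for x, m in zip(positions, masses):
        s = x / dx - 0.5                 # position in cell-centered coords
        i = int(np.floor(s)) % n_cells
        frac = s - np.floor(s)           # fractional distance to next center
        rho[i] += m * (1.0 - frac) / dx
        rho[(i + 1) % n_cells] += m * frac / dx
    return rho
```

The same linear weights are reused when interpolating the solved gravitational force back from the grid to the particles, which keeps the scheme momentum-conserving.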

  2. A System for Coding the Presenting Requests of Ambulatory Patients

    PubMed Central

    Weinstein, Philip; Gordon, Michael J.; Gilson, John S.

    1977-01-01

    Effective methods developed to review and study the care of patients in hospital have not been applicable to ambulatory care, in which definitive diagnosis is the exception rather than the rule. A reasonable alternative to using diagnosis as the basis for assessing ambulatory care is to use the problems or requests presented by the patients themselves. A system has been developed for classifying and coding this information for flexible computer retrieval. Testing indicates that the system is simple in design, easily mastered by nonphysicians and provides reliable, useful data at a low cost. PMID:855324

  3. Improving the Capabilities of a Continuum Laser Plasma Interaction Code

    SciTech Connect

    Hittinger, J F; Dorr, M R

    2006-06-15

    The numerical simulation of plasmas is a critical tool for inertial confinement fusion (ICF). We have been working to improve the predictive capability of a continuum laser plasma interaction code pF3d, which couples a continuum hydrodynamic model of an unmagnetized plasma to paraxial wave equations modeling the laser light. Advanced numerical techniques such as local mesh refinement, multigrid, and multifluid Godunov methods have been adapted and applied to nonlinear heat conduction and to multifluid plasma models. We describe these algorithms and briefly demonstrate their capabilities.

  4. Surface code error correction on a defective lattice

    NASA Astrophysics Data System (ADS)

    Nagayama, Shota; Fowler, Austin G.; Horsman, Dominic; Devitt, Simon J.; Van Meter, Rodney

    2017-02-01

    The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults must negatively affect the computation, we can deal with them by adapting error-correction schemes. In this paper we have simulated statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90%, and 95%, showing practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend Stace et al.'s superplaquette solution against dynamic losses for the surface code to handle static losses such as physically faulty qubits [1]. The single-fault analysis shows that a static loss at the periphery of the lattice has less negative effect than a static loss at the center. The randomly faulty analysis shows that 95% yield is good enough to build a large-scale quantum computer. The local gate error rate threshold is ~0.3%, and a code distance of seven suppresses the residual error rate below the original error rate at p = 0.1%. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the stabilizer-measurement cycle time gave the strongest correlation with logical error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.

  5. Home energy ratings and energy codes -- A marriage that should work

    SciTech Connect

    Verdict, M.E.; Fairey, P.W.; DeWein, M.C.

    1998-07-01

    This paper examines how voluntary home energy rating systems (HERS) can be married to mandatory energy codes to increase code compliance while providing added benefits to consumers, builders, and code officials. Effective code enforcement and compliance is a common problem for state and local jurisdictions attempting to reduce energy consumption and increase housing affordability. Reasons frequently cited for energy code noncompliance are: (1) builder resistance to government regulations and change in building practices; (2) the perceived complexity of the code; (3) a lack of familiarity with energy impacts among code officials and the housing industry; and (4) inadequate government resources for enforcement. By combining ratings and codes, one can create a win-win approach for code officials and energy rating organizations, the housing industry, as well as consumers who wish to reduce air pollution and energy waste. Additionally, state and local government experiences where the marriage between codes and ratings has begun are highlighted and the barriers and benefits assessed.

  6. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. Cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
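Gallager's hard-decision bit-flipping decoding mentioned above can be sketched in a few lines. This is a generic textbook variant (each iteration flips the bits involved in the largest number of unsatisfied checks), not the paper's finite-geometry construction:

```python
def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoding (Gallager-style sketch).

    H is an m x n binary parity-check matrix (list of lists) and y a
    received hard-decision word (list of 0/1 bits).
    """
    x = list(y)
    n = len(x)
    for _ in range(max_iters):
        # compute the syndrome: one bit per parity check
        syndrome = [sum(h[j] * x[j] for j in range(n)) % 2 for h in H]
        if not any(syndrome):
            break  # all parity checks satisfied; decoding done
        # count, for each bit, the unsatisfied checks it participates in
        fails = [sum(H[i][j] * syndrome[i] for i in range(len(H)))
                 for j in range(n)]
        worst = max(fails)
        # flip every bit with the worst count
        x = [(x[j] + (fails[j] == worst)) % 2 for j in range(n)]
    return x
```

With a (7,4) Hamming parity-check matrix, a single flipped bit in the all-zero codeword is corrected in a couple of iterations.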

  7. pyro: A teaching code for computational astrophysical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, M.

    2014-10-01

    We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
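The advection solvers pyro teaches are finite-volume methods; the simplest member of that family is the first-order upwind update for linear advection on a periodic 1-D grid. The sketch below is generic and illustrative, not pyro's actual API:

```python
def advect_upwind(u, velocity, dx, dt, steps):
    """First-order upwind finite-volume update for u_t + a u_x = 0
    on a periodic 1-D grid (assumes velocity > 0)."""
    c = velocity * dt / dx  # CFL number; the scheme is stable for c <= 1
    n = len(u)
    for _ in range(steps):
        # each cell is updated from its upwind (left) neighbor
        u = [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]
    return u
```

With c exactly 1 the profile translates one cell per step, which makes the scheme easy to sanity-check.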

  8. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.

  9. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
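The third-order TVD Runge-Kutta time integrator referred to above is the standard Shu-Osher scheme, which advances the solution through convex combinations of forward-Euler substeps (the property that preserves the TVD character of the spatial operator). A minimal sketch for a generic right-hand side L:

```python
def tvd_rk3_step(u, dt, L):
    """One step of the third-order TVD Runge-Kutta scheme of Shu & Osher
    for du/dt = L(u), with u and L(u) given as lists of floats."""
    # first forward-Euler substep
    u1 = [ui + dt * li for ui, li in zip(u, L(u))]
    # convex combination: 3/4 of u plus 1/4 of an Euler step from u1
    u2 = [0.75 * ui + 0.25 * (vi + dt * li)
          for ui, vi, li in zip(u, u1, L(u1))]
    # final combination: 1/3 of u plus 2/3 of an Euler step from u2
    return [ui / 3.0 + 2.0 / 3.0 * (vi + dt * li)
            for ui, vi, li in zip(u, u2, L(u2))]
```

Applied to the scalar test equation du/dt = -u, one step reproduces exp(-dt) to third-order accuracy.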

  10. Analyzing a School Dress Code in a Junior High School: A Set of Exercises.

    ERIC Educational Resources Information Center

    East, Maurice A.; And Others

    Five exercises based on a sample school dress code were designed from a political science perspective to help students develop skills in analyzing issues. The exercises are intended to be used in five or more class periods. In the first exercise, students read a sample dress code and name groups of people who might have opinions about it. In…

  11. ELLIPT2D: A Flexible Finite Element Code Written in Python

    SciTech Connect

    Pletzer, A.; Mollis, J.C.

    2001-03-22

    The use of the Python scripting language for scientific applications and in particular to solve partial differential equations is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied and so are the boundary conditions, which can be of Dirichlet, Neumann or Robin type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include: operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
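The dictionary-based sparse matrix representation highlighted above is easy to sketch: only nonzero entries are stored, keyed by their (row, column) index. The function below is illustrative, not the actual ELLIPT2D API:

```python
def sparse_matvec(A, x):
    """Multiply a sparse matrix stored as a dict {(i, j): value}
    by a dense vector x, iterating only over nonzero entries."""
    y = [0.0] * len(x)
    for (i, j), a_ij in A.items():
        y[i] += a_ij * x[j]
    return y
```

For assembled finite element matrices, which are overwhelmingly zero, this hash-table layout keeps both storage and the matrix-vector product proportional to the number of nonzeros.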

  12. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    SciTech Connect

    Thuc Bui

    2007-12-06

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions and self magnetic fields, and to implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently unfeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  13. A Mutation Model from First Principles of the Genetic Code.

    PubMed

    Thorvaldsen, Steinar

    2016-01-01

    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for the eukaryote, bacterium, and archaea families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.
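The equilibrium distribution against which the CPM model measures genetic decay is the fixed point of the Markov chain. A generic power-iteration sketch is shown below with a toy 2-state transition matrix, not the paper's 20-state amino acid chain:

```python
def stationary_distribution(P, tol=1e-12, max_iters=10000):
    """Equilibrium distribution of a row-stochastic transition matrix P
    (list of rows), found by repeatedly applying the chain to a uniform
    starting distribution until it stops changing."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iters):
        # one step of the chain: new_j = sum_i pi_i * P[i][j]
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi
```

For the toy matrix [[0.9, 0.1], [0.5, 0.5]] the iteration converges to (5/6, 1/6), which can be checked directly against the balance equations.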

  14. Barker code pulse compression with a large Doppler tolerance

    NASA Astrophysics Data System (ADS)

    Jiang, Xuefeng; Zhu, Zhaoda

    1991-03-01

    This paper discusses the application of least square approximate inverse filtering techniques to radar range sidelobe suppression. The method is illustrated by application to the design of a compensated noncoherent sidelobe suppression filter (SSF). The compensated noncoherent SSF of the 13-element Barker code has been found. The -40 kHz to 40 kHz Doppler tolerance of the filter is obtained under the conditions that the subpulse duration is equal to 0.7 microsec and the peak sidelobe level is less than -30 dB. Theoretical computations and experimental results indicate that the SSF implemented has much wider Doppler tolerance than the Rihaczek-Golden (1971) SSF.
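The pulse-compression value of the 13-element Barker code comes from its aperiodic autocorrelation: the mainlobe is 13 while every sidelobe has magnitude at most 1. This standard property is easy to verify directly:

```python
def autocorrelation(code):
    """Aperiodic autocorrelation of a +1/-1 sequence: lag-k inner
    product of the sequence with a shifted copy of itself."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k))
            for k in range(n)]

# the 13-element Barker code
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```

The sidelobe suppression filters discussed in the paper push these residual unit-level sidelobes down further, to below -30 dB.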

  15. A computer code for performance of spur gears

    NASA Technical Reports Server (NTRS)

    Wang, K. L.; Cheng, H. S.

    1983-01-01

    In spur gears both performance and failure predictions are known to be strongly dependent on the variation of load, lubricant film thickness, and total flash or contact temperature of the contacting point as it moves along the contact path. The need for an accurate tool for predicting these variables has prompted the development of a computer code based on recent findings in EHL and on finite element methods. The analyses and some typical results are presented to illustrate the effects of gear geometry, velocity, load, lubricant viscosity, and surface convective heat transfer coefficient on the performance of spur gears.

  16. BERTHA: A versatile transmission line and circuit code

    NASA Astrophysics Data System (ADS)

    Hinshelwood, D. D.

    1983-11-01

    An improved version of the NRL transmission line code of W. H. Lupton is presented. The capabilities of the original program were extended to allow magnetically insulated transmission lines, plasma opening switches, imploding plasma loads and discrete element electrical networks, for example, to be modeled. BERTHA is used to simulate any system that is represented by a configuration of transmission line elements. The electrical behavior of the system is calculated by repeatedly summing the reflected and transmitted waves at the ends of each element. This program is versatile, easy to use and easily implemented on desktop microcomputers.
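The wave-summing update such a transmission line code applies at each element boundary reduces, for a junction between two line impedances, to the standard reflection and transmission coefficients. The sketch below is a minimal illustration of that elementary step, not BERTHA's actual implementation:

```python
def junction_waves(v_incident, z1, z2):
    """Reflected and transmitted voltage waves for a wave of amplitude
    v_incident arriving at the junction between transmission-line
    elements of characteristic impedance z1 and z2."""
    rho = (z2 - z1) / (z2 + z1)            # reflection coefficient
    return rho * v_incident, (1.0 + rho) * v_incident
```

A matched junction (z1 == z2) reflects nothing and transmits the full wave; an impedance step splits the wave, and repeatedly summing such contributions at every element end each time step is what builds up the system's electrical behavior.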

  17. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.; Herman, M.; Capote, R.

    2014-06-01

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron, charged particle and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present an application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  18. The movement towards a more experimental approach to problem solving in mathematics using coding

    NASA Astrophysics Data System (ADS)

    Barichello, Leonardo

    2016-07-01

    Motivated by a problem proposed in a coding competition for secondary students, I will show in this paper how coding substantially changed the problem-solving process towards a more experimental approach.

  19. A Network Coding Based Routing Protocol for Underwater Sensor Networks

    PubMed Central

    Wu, Huayang; Chen, Min; Guan, Xin

    2012-01-01

    Due to the particularities of the underwater environment, some negative factors will seriously interfere with data transmission rates, reliability of data communication, communication range, and network throughput and energy consumption of underwater sensor networks (UWSNs). Thus, when routing protocols for underwater sensor networks are studied, full consideration must be given to node energy savings, to maintaining quick, correct and effective data transmission, and to extending the network life cycle. In this paper, we propose a novel routing algorithm for UWSNs. To increase energy efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSR). We design a probability balanced mechanism and apply it to TSR, yielding a time-slot based balanced routing (TSBR) algorithm. The theory of network coding is then introduced into TSBR to further reduce node energy consumption and extend network lifetime. Hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed time-slot based balancing routing algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the process of routing construction, balance energy consumption of each node and effectively prolong the network lifetime. PMID:22666045
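The network-coding primitive underlying protocols like the one above can be shown with the simplest case: a relay XORs two packets into one coded transmission, and each receiver recovers the packet it is missing by XOR-ing again. This is a generic sketch of the idea, not the TSBNC protocol itself:

```python
def xor_encode(pkt_a, pkt_b):
    """Combine two equal-length packets into one coded packet; the
    relay transmits a XOR b once instead of forwarding a and b
    separately, saving one transmission."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

def xor_decode(coded, known):
    """Recover the unknown packet from the coded packet and the packet
    already held (XOR is its own inverse)."""
    return bytes(x ^ y for x, y in zip(coded, known))
```
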

  20. Composing Data Parallel Code for a SPARQL Graph Engine

    SciTech Connect

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste; Haglin, David J.; Feo, John

    2013-09-08

    Big data analytics processes large amounts of data to extract knowledge. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility of performing in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 Shared-memory Multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.

  1. A memristive spiking neuron with firing rate coding

    PubMed Central

    Ignatov, Marina; Ziegler, Martin; Hansen, Mirko; Petraru, Adrian; Kohlstedt, Hermann

    2015-01-01

    Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulation of neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2−x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926. PMID:26539074
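Firing-rate coding as described above (a stronger stimulus produces more all-or-nothing spikes per unit time) is conventionally illustrated with a leaky integrate-and-fire model. The sketch below is that textbook model, not a model of the paper's VO2/Ag memristive circuit:

```python
def lif_spike_count(current, steps=1000, dt=0.001, tau=0.02,
                    threshold=1.0, resistance=1.0):
    """Spike count of a leaky integrate-and-fire neuron driven by a
    constant input current over steps * dt seconds."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        # membrane potential relaxes towards R*I with time constant tau
        v += dt / tau * (resistance * current - v)
        if v >= threshold:       # all-or-nothing spike, then reset
            spikes += 1
            v = 0.0
    return spikes
```

A subthreshold current (R*I below threshold) never produces a spike, while above threshold the spike count grows monotonically with stimulus strength, which is the essence of rate coding.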

  2. CFD and Neutron codes coupling on a computational platform

    NASA Astrophysics Data System (ADS)

    Cerroni, D.; Da Vià, R.; Manservisi, S.; Menghini, F.; Scardovelli, R.

    2017-01-01

    In this work we investigate the thermal-hydraulics behavior of a PWR nuclear reactor core, evaluating the power generation distribution taking into account the local temperature field. The temperature field, evaluated using a self-developed CFD module, is exchanged with a neutron code, DONJON-DRAGON, which updates the macroscopic cross sections and evaluates the new neutron flux. From the updated neutron flux the new peak factor is evaluated and the new temperature field is computed. The exchange of data between the two codes is obtained thanks to their inclusion in the computational platform SALOME, an open-source tool developed by the collaborative project NURESAFE. The numerical libraries MEDmem, included in the SALOME platform, are used in this work for the projection of computational fields from one problem to another. The two problems are driven by a common supervisor that can access the computational fields of both systems. In every time step, the temperature field is extracted from the CFD problem and set into the neutron problem. After this iteration the new power peak factor is projected back into the CFD problem and the new time step can be computed. Several computational examples, where both neutron and thermal-hydraulics quantities are parametrized, are finally reported in this work.

  3. A general, recursive, and open-ended response code.

    PubMed

    Ringholm, Magnus; Jonsson, Dan; Ruud, Kenneth

    2014-03-30

    We present a new implementation of a recent open-ended response theory formulation for time- and perturbation-dependent basis sets (Thorvaldsen et al., J. Chem. Phys. 2008, 129, 214108) at the Hartree-Fock and density functional levels of theory. A novel feature of the new implementation is the use of recursive programming techniques, making it possible to write highly compact code for the analytic calculation of any response property at any valid choice of rule for the order of perturbation at which to include perturbed density matrices. The formalism is expressed in terms of the density matrix in the atomic orbital basis, allowing the recursive scheme presented here to be used in linear-scaling formulations of response theory as well as with two- and four-component relativistic wave functions. To demonstrate the new code, we present calculations of the third geometrical derivatives of the frequency-dependent second hyperpolarizability for HSOH at the Hartree-Fock level of theory, a seventh-order energy derivative involving basis sets that are both time and perturbation dependent.

  4. A role for non-coding variation in schizophrenia

    PubMed Central

    Roussos, Panos; Mitchell, Amanda C.; Voloudakis, Georgios; Fullard, John F.; Pothula, Venu M.; Tsang, Jonathan; Stahl, Eli A.; Georgakopoulos, Anastasios; Ruderfer, Douglas M.; Charney, Alexander; Okada, Yukinori; Siminovitch, Katherine A.; Worthington, Jane; Padyukov, Leonid; Klareskog, Lars; Gregersen, Peter K.; Plenge, Robert M.; Raychaudhuri, Soumya; Fromer, Menachem; Purcell, Shaun M.; Brennand, Kristen J.; Robakis, Nikolaos K.; Schadt, Eric E.; Akbarian, Schahram; Sklar, Pamela

    2014-01-01

    A large portion of common variant loci associated with genetic risk for schizophrenia reside within non-coding sequence of unknown function. Here, we demonstrate promoter and enhancer enrichment in schizophrenia variants associated with expression quantitative trait loci (eQTL). The enrichment is greater when functional annotations derived from human brain are used relative to peripheral tissues. Regulatory trait concordance analysis ranked genes within schizophrenia genome-wide significant loci for a potential functional role, based on co-localization of a risk SNP, eQTL and regulatory element sequence. We identified potential physical interactions of non-contiguous proximal and distal regulatory elements. This was verified in prefrontal cortex and induced pluripotent stem cell-derived neurons for the L-type calcium channel (CACNA1C) risk locus. Our findings point to a functional link between schizophrenia-associated non-coding SNPs and 3-dimensional genome architecture associated with chromosomal loopings and transcriptional regulation in the brain. PMID:25453756

  5. FARGO3D: A NEW GPU-ORIENTED MHD CODE

    SciTech Connect

    Benitez-Llambay, Pablo; Masset, Frédéric S. E-mail: masset@icf.unam.mx

    2016-03-15

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet–disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.

  6. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  7. System for loading executable code into volatile memory in a downhole tool

    DOEpatents

    Hall, David R.; Bartholomew, David B.; Johnson, Monte L.

    2007-09-25

    A system for loading an executable code into volatile memory in a downhole tool string component comprises a surface control unit comprising executable code. An integrated downhole network comprises data transmission elements in communication with the surface control unit and the volatile memory. The executable code, stored in the surface control unit, is not permanently stored in the downhole tool string component. In a preferred embodiment of the present invention, the downhole tool string component comprises boot memory. In another embodiment, the executable code is an operating system executable code. Preferably, the volatile memory comprises random access memory (RAM). A method for loading executable code to volatile memory in a downhole tool string component comprises sending the code from the surface control unit to a processor in the downhole tool string component over the network. A central processing unit writes the executable code in the volatile memory.

  8. CANTATAdb: A Collection of Plant Long Non-Coding RNAs

    PubMed Central

    Szcześniak, Michał W.; Rosikiewicz, Wojciech; Makałowska, Izabela

    2016-01-01

    Long non-coding RNAs (lncRNAs) represent a class of potent regulators of gene expression that are found in a wide array of eukaryotes; however, our knowledge about these molecules in plants is still very limited. In particular, a number of model plant species still lack comprehensive data sets of lncRNAs and their annotations, and very little is known about their biological roles. To meet these shortcomings, we created an online database of lncRNAs in 10 model plant species. The lncRNAs were identified computationally using dozens of publicly available RNA sequencing (RNA-Seq) libraries. Expression values, coding potential, sequence alignments as well as other types of data provide annotation for the identified lncRNAs. In order to better characterize them, we investigated their potential roles in splicing modulation and deregulation of microRNA functions. The data are freely available for searching, browsing and downloading from an online database called CANTATAdb (http://cantata.amu.edu.pl, http://yeti.amu.edu.pl/CANTATA/). PMID:26657895

  9. KORC: A Kinetic Orbit Runaway Electrons code for tokamak disruptions

    NASA Astrophysics Data System (ADS)

    Carbajal Gomez, Leopoldo; Del-Castillo-Negrete, Diego; Spong, Donald; Seal, Sudip; Baylor, Larry

    2016-10-01

    Runaway electrons (RE) resulting from the violent termination of tokamak plasmas pose a serious threat to ITER due to the very high energies they can reach and deposit on the plasma facing components. Most of the current modelling of RE in fusion tokamak plasmas relies on reduced models such as the bounce-average and the test particle equations. In some scenarios, the radiation losses in these models might lead to uncertainties in the RE parameters that determine their confinement and energy limit. In order to study this in detail we have developed a new Kinetic Orbit Runaway electrons Code (KORC). KORC follows the dynamics of ensembles of relativistic electrons in the 6D phase space fully resolving gyro-motion under the influence of the Lorentz force, the Landau-Lifshitz consistent formulation of the Abraham-Lorentz-Dirac force for radiation damping, and collisions with impurities and the background plasma. KORC is parallelized using OpenMP/MPI, and benefits from a modified relativistic leap-frog method along with an operator splitting scheme for solving the RE dynamics in different magnetic fields. The code is robust, conservative, and shows nearly linear strong scaling. Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U. S. Department of Energy.

  10. HD Photo: a new image coding technology for digital photography

    NASA Astrophysics Data System (ADS)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  11. FRINK - A Code to Evaluate Space Reactor Transients

    SciTech Connect

    Poston, David I.; Marcille, Thomas F.; Dixon, David D.; Amiri, Benjamin W.

    2007-01-30

    One of the biggest needs for space reactor design and development is detailed system modeling. Most proposed space fission systems are very different from previously operated fission power systems, and extensive testing and modeling will be required to demonstrate integrated system performance. There are also some aspects of space reactors that distinguish them from most terrestrial applications and require different modeling approaches. The Fission Reactor Integrated Nuclear Kinetics (FRINK) code was developed to evaluate simplified space reactor transients (note: the term ''space reactor'' inherently includes planetary and lunar surface reactors). FRINK is an integrated point kinetic/thermal-hydraulic transient analysis FORTRAN code - ''integrated'' refers to the simultaneous solution of the thermal and neutronic equations. In its current state FRINK is a very simple system model, perhaps better referred to as a reactor model. The ''system'' only extends to the primary loop power removal boundary condition; however, this allows the simulation of simplified transients (e.g. loss of primary heat sink, loss of flow, large reactivity insertion, etc.), which are most important in bounding early system conceptual design. FRINK could then be added to a complete system model later in the design and development process as the system design matures.

  12. ROAR: A 3-D tethered rocket simulation code

    SciTech Connect

    York, A.R. II; Ludwigsen, J.S.

    1992-04-01

    A high-velocity impact testing technique, utilizing a tethered rocket, is being developed at Sandia National Laboratories. The technique involves tethering a rocket assembly to a pivot location and flying it in a semicircular trajectory to deliver the rocket and payload to an impact target location. Integral to developing this testing technique is the parallel development of accurate simulation models. An operational computer code, called ROAR (Rocket-on-a-Rope), has been developed to simulate the three-dimensional transient dynamic behavior of the tether and motor/payload assembly. This report presents a discussion of the parameters modeled, the governing set of equations, the through-time integration scheme, and the input required to set up a model. Also included is a sample problem and a comparison with experimental results.

  13. Equilibrium and stability code for a diffuse plasma

    PubMed Central

    Betancourt, Octavio; Garabedian, Paul

    1976-01-01

    A computer code to investigate the equilibrium and stability of a diffuse plasma in three dimensions is described that generalizes earlier work on a sharp free boundary model. Toroidal equilibria of a plasma are determined by considering paths of steepest descent associated with a new version of the variational principle of magnetohydrodynamics that involves mapping a fixed coordinate domain onto the plasma. A discrete approximation of the potential energy is written down following the finite element method, and the resulting expression is minimized with respect to the values of the mapping at points of a rectangular grid. If a relative minimum of the discrete analogue of the energy is attained, the corresponding equilibrium is considered to be stable. PMID:16592310

  14. A code of ethics for European health librarians: the points of departure.

    PubMed

    McSeán, T; Tsafrir, J

    1995-06-01

    Codes of ethics are a classic mark of a profession, and their preparation is an important part of the work expected of a professional organization. EAHIL has recently embarked on drafting a code for European health librarians. This paper explains the background to EAHIL's decision, and reviews existing codes of ethical practice in the fields of both medicine and library and information work.

  15. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  16. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  17. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  18. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Crab Delivery Condition Codes 3a Table 3a to Part 680 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code...

  19. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 11 2011-10-01 2011-10-01 false Crab Delivery Condition Codes 3a Table 3a to Part 680 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code...

  20. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    McEliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
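The Krawtchouk polynomials reviewed in the appendices can be evaluated directly from their binomial-sum definition; a minimal sketch for the binary case (the function name is my own) is:

```python
from math import comb

def krawtchouk(k, x, n):
    """Binary Krawtchouk polynomial K_k(x; n) = sum_j (-1)^j C(x,j) C(n-x,k-j).

    math.comb(a, b) returns 0 when b > a, so the sum needs no bounds checks.
    """
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))
```

These polynomials satisfy the orthogonality relation sum_x C(n,x) K_k(x) K_l(x) = 2^n C(n,k) delta_{kl}, the property exploited by the Delsarte-MacWilliams linear-programming framework that underlies the bound.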

  1. Reasoning with Computer Code: a new Mathematical Logic

    NASA Astrophysics Data System (ADS)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  2. Is subjective duration a signature of coding efficiency?

    PubMed Central

    Eagleman, David M.; Pariyadath, Vani

    2009-01-01

    Perceived duration is conventionally assumed to correspond with objective duration, but a growing literature suggests a more complex picture. For example, repeated stimuli appear briefer in duration than a novel stimulus of equal physical duration. We suggest that such duration illusions appear to parallel the neural phenomenon of repetition suppression, and we marshal evidence for a new hypothesis: the experience of duration is a signature of the amount of energy expended in representing a stimulus, i.e. the coding efficiency. This novel hypothesis offers a unified explanation for almost a dozen illusions in the literature in which subjective duration is modulated by properties of the stimulus such as size, brightness, motion and rate of flicker. PMID:19487187

  3. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  4. Different measurements with a SAW convolver and description of a pseudo-random binary sequence (PRBS) code generator

    NASA Astrophysics Data System (ADS)

    Gaellstedt, O.; Ringoe, U.; Wallden, R.

    1982-03-01

    Measurements with a surface acoustic wave piezoelectric monolithic convolver, which allows the correlation of codes up to 1000 or more chips in length at chip rates from a few MHz to over 100 MHz with essentially instantaneous programmability are described. Most of the measurements were made using a pseudo random binary sequence (PRBS) code generator which can produce two maximum length codes, the one opposite-in-time to the other. Operating instructions and a literature survey are given.
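A maximum-length PRBS of the kind described here is conventionally generated with a linear-feedback shift register; the sketch below uses a hypothetical degree-4 register with the primitive feedback polynomial x^4 + x + 1 (period 2^4 - 1 = 15), and the time-reversed companion code mentioned in the abstract corresponds, in this convention, to the taps of the reciprocal polynomial x^4 + x^3 + 1.

```python
def prbs(taps, degree, seed=1, length=None):
    """Fibonacci LFSR. `taps` are 1-indexed bit positions XORed for feedback;
    with a primitive feedback polynomial and any nonzero seed the output
    is an m-sequence of period 2**degree - 1."""
    if length is None:
        length = (1 << degree) - 1
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)                        # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1             # XOR the tapped bits
        state = (state >> 1) | (fb << (degree - 1))  # shift feedback in at the top
    return out
```

For example, `prbs([4, 1], 4)` yields one full period of 15 chips containing exactly 8 ones, the balance property characteristic of m-sequences; a convolver correlating such a code against its time reverse exploits the sequence's sharp autocorrelation peak.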

  5. Creating a code of conduct to enable organizational change.

    PubMed

    Marr, Jo-Anne; Sanders, Gail; Neil, Ann; Murphy, Lisa

    2006-07-25

    This article provides an overview of the elements that were involved in developing a comprehensive change management strategy at a large, multi-site laboratory as it readied for a major environmental change in the clinical laboratory. A multi-year strategic and tactical plan was created based on an organization-wide employee satisfaction survey, an environmental survey, business needs, and internal/external pressures. The goal was to build a collaborative, team-oriented, and respectful place to work, and to ensure that the workplace became a vibrant environment focused on both individual growth and business improvement and efficiency. This article presents the strategy and outcome of the change, including external and internal environmental influences, challenges, and successes. It also discusses the dissemination of information and engagement of staff, clarification of roles and responsibilities, code of conduct, career laddering and career self-management, and a review of deficiencies impacting efficiency and productivity.

  6. A predictive coding account of MMN reduction in schizophrenia.

    PubMed

    Wacongne, Catherine

    2016-04-01

    The mismatch negativity (MMN) is thought to be an index of the automatic activation of a specialized network for active prediction and deviance detection in the auditory cortex. It is consistently reduced in schizophrenic patients and has received a lot of interest as a clinical and translational tool. The main neuronal hypothesis regarding the mechanisms leading to a reduced MMN in schizophrenic patients is a dysfunction of NMDA receptors (NMDA-R). However, this hypothesis has never been implemented in a neuronal model. In this paper, I examine the consequences of NMDA-R dysfunction in a neuronal model of MMN based on predictive coding principles. I also investigate how predictive processes may interact with synaptic adaptation in MMN generation and examine the consequences of this interaction for the use of MMN paradigms in schizophrenia research.

  7. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB higher, respectively, than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and can be more suitable for optical transmission systems.
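The combining step in a CRT-based construction rests on Chinese remainder reconstruction; a minimal sketch of that reconstruction for pairwise-coprime moduli (illustrating the theorem itself, not the paper's specific parity-check matrix construction) is:

```python
def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli by incremental
    (Garner-style) accumulation; returns the unique solution modulo the
    product of the moduli."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # Find t with x + M*t = r (mod m), i.e. t = (r - x) * M^{-1} (mod m).
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x
```

The three-argument `pow` with exponent -1 (Python 3.8+) supplies the modular inverse; in CRT-based QC-LDPC combining, this kind of reconstruction is what lets component codes of coprime lengths be merged into a longer code without shortening cycles in the Tanner graph.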

  8. 50 CFR Table 1a to Part 679 - Delivery Condition* and Product Codes

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Delivery Condition* and Product Codes 1a... ALASKA Pt. 679, Table 1a Table 1a to Part 679—Delivery Condition* and Product Codes Description Code... Stomachs. Includes all internal organs (ancillary only) 35 Surimi. Paste from fish flesh and additives...

  9. User's guide for a flat wake rotor inflow/wake velocity prediction code, DOWN

    NASA Technical Reports Server (NTRS)

    Wilson, John C.

    1991-01-01

    A computer code named DOWN was created to implement a flat wake theory for the calculation of rotor inflow and wake velocities. A brief description of the code methodology and instructions for its use are given. The code will be available from NASA's Computer Software Management and Information Center (COSMIC).

  10. DYNAVAC: a transient-vacuum-network analysis code

    SciTech Connect

    Deis, G.A.

    1980-07-08

    This report discusses the structure and use of the program DYNAVAC, a new transient-vacuum-network analysis code implemented on the NMFECC CDC-7600 computer. DYNAVAC solves for the transient pressures in a network of up to twenty lumped volumes, interconnected in any configuration by specified conductances. Each volume can have an internal gas source, a pumping speed, and any initial pressure. The gas-source rates can vary with time in any piecewise-linear manner, and up to twenty different time variations can be included in a single problem. In addition, the pumping speed in each volume can vary with the total gas pumped in the volume, thus simulating the saturation of surface pumping. This report is intended to be both a general description and a user's manual for DYNAVAC.
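The lumped-volume model DYNAVAC solves reduces to coupled ODEs of the form V_i dP_i/dt = Q_i - S_i P_i + sum_j C_ij (P_j - P_i). The sketch below integrates a hypothetical two-volume network with explicit Euler steps; all names and values are invented for illustration, and DYNAVAC's actual solver and saturation model are not reproduced here.

```python
def step(P, V, Q, S, C, dt):
    """One explicit-Euler step of V_i dP_i/dt = Q_i - S_i*P_i + sum_j C_ij*(P_j - P_i).
    P: pressures (Torr); V: volumes (L); Q: gas loads (Torr*L/s);
    S: pump speeds (L/s); C: symmetric conductance matrix (L/s)."""
    n = len(P)
    return [P[i] + dt / V[i] * (Q[i] - S[i] * P[i]
            + sum(C[i][j] * (P[j] - P[i]) for j in range(n) if j != i))
            for i in range(n)]

# Volume 0 carries the gas load; volume 1 is pumped; one interconnecting conductance.
V, Q, S = [10.0, 10.0], [1.0, 0.0], [0.0, 100.0]
C = [[0.0, 50.0], [50.0, 0.0]]
P = [0.0, 0.0]
for _ in range(50000):
    P = step(P, V, Q, S, C, dt=1e-3)
# Analytic steady state: P1 = Q0/S1 = 0.01 Torr, P0 = P1 + Q0/C01 = 0.03 Torr
```

At steady state the pressure drop across the conductance is Q/C and the pumped volume sits at Q/S, which the transient solution approaches exponentially; a production code would also make S a function of gas pumped to capture surface-pump saturation, as the abstract describes.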

  11. TRANS4: a computer code calculation of solid fuel penetration of a concrete barrier. [LMFBR; GCFR

    SciTech Connect

    Ono, C. M.; Kumar, R.; Fink, J. K.

    1980-07-01

    The computer code, TRANS4, models the melting and penetration of a solid barrier by a solid disc of fuel following a core disruptive accident. This computer code has been used to model fuel debris penetration of basalt, limestone concrete, basaltic concrete, and magnetite concrete. Sensitivity studies were performed to assess the importance of various properties on the rate of penetration. Comparisons were made with results from the GROWS II code.

  12. Chirality in a quaternionic representation of the genetic code.

    PubMed

    Manuel Carlevaro, C; Irastorza, Ramiro M; Vericat, Fernando

    2016-12-01

    A quaternionic representation of the genetic code, previously reported by the authors (BioSystems 141 (10-19), 2016), is updated in order to incorporate chirality of nucleotide bases and amino acids. The original representation associates with each nucleotide base a prime integer quaternion of norm 7 and involves a function that assigns to each codon, represented by three of these quaternions, another integer quaternion (amino acid type quaternion). The assignment is such that the essentials of the standard genetic code (particularly its degeneracy) are preserved. To show the advantages of such a quaternionic representation we have designed an algorithm to go from the primary to the tertiary structure of the protein. The algorithm uses, besides the type quaternions, a second kind of quaternions with real components that we additionally associate with the amino acids according to their order along the proteins (order quaternions). In this context, we incorporate chirality in our representation by observing that the set of eight integer quaternions of norm 7 can be partitioned into a pair of subsets of cardinality four, each with their elements mutually conjugate, and by putting them into one-to-one correspondence with the two sets of enantiomers (D and L) of the four nucleotide bases adenine, cytosine, guanine and uracil, respectively. We then propose two diagrams in order to describe the hypothetical evolution of the genetic codes corresponding to both of the chiral systems of affinities: D-nucleotide bases/L-amino acids and L-nucleotide bases/D-amino acids at reading frames 5'→3' and 3'→5', respectively. Guided by these diagrams we define functions that in each case assign to the triplets of D- (L-) bases a L- (D-) amino acid type integer quaternion. Specifically, the integer quaternion associated with a given D-amino acid is the conjugate of that one corresponding to the enantiomer L. The chiral type quaternions obtained for the amino acids are used
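The arithmetic behind this representation, integer quaternions, conjugation, and the norm-7 condition, can be sketched generically as follows (an illustration of quaternion algebra, not the paper's specific base-to-quaternion assignment):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    """Quaternion conjugate: negate the vector part."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def norm(q):
    """Quaternion norm w^2 + x^2 + y^2 + z^2 (multiplicative)."""
    return sum(c * c for c in q)
```

For instance (2, 1, 1, 1) has norm 4 + 1 + 1 + 1 = 7, its conjugate also has norm 7, and q * conj(q) = (7, 0, 0, 0): pairing each quaternion with its conjugate, as the chirality construction does with D/L enantiomers, leaves the norm invariant.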

  13. A Proxy Signature Scheme Based on Coding Theory

    NASA Astrophysics Data System (ADS)

    Jannati, Hoda; Falahati, Abolfazl

    Proxy signature helps the proxy signer to sign messages on behalf of the original signer. This signature is used when the original signer is not available to sign a specific document. In this paper, we introduce a new proxy signature scheme based on Stern's identification scheme, whose security depends on the syndrome decoding problem. The proposed scheme is the first code-based proxy signature, and since the syndrome decoding problem is believed to be hard even for quantum computers, it is a candidate for the post-quantum setting. In this scheme, the operations to perform are linear and very simple, thus the signature is computed quickly and can be implemented on a smart card in a quite efficient way. The proposed scheme also satisfies the unforgeability, undeniability, non-transferability and distinguishability properties, which are the security requirements for a proxy signature.

  14. The Penal Code (Amendment) Act 1989 (Act A727), 1989.

    PubMed

    1989-01-01

    In 1989, Malaysia amended its penal code to provide that inducing an abortion is not an offense if the procedure is performed by a registered medical practitioner who has determined that continuation of the pregnancy would risk the life of the woman or damage her mental or physical health. Additional amendments include a legal description of the conditions which constitute the act of rape. Among these conditions is intercourse with or without consent with a woman under the age of 16. Malaysia fails to recognize rape within a marriage unless the woman is protected from her husband by judicial decree or is living separately from her husband according to Muslim custom. Rape is punishable by imprisonment for a term of 5-20 years and by whipping.

  15. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary
    Program title: FLY 3.1
    Catalogue identifier: ADSC_v2_0
    Licensing provisions: yes
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    No. of lines in distributed program, including test data, etc.: 158 172
    No. of bytes in distributed program, including test data, etc.: 4 719 953
    Distribution format: tar.gz
    Programming language: Fortran 90, C
    Computer: Beowulf cluster, PC, MPP systems
    Operating system: Linux, Aix
    RAM: 100M words
    Catalogue identifier of previous version: ADSC_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159
    Does the new version supersede the previous version?: yes
    Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force.
    Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986).
    Reasons for the new version: The new version of FLY is implemented by using the MPI-2 standard: the distributed version 3.1 was developed by using the MPICH2 library on a PC Linux cluster. Today the FLY performance allows us to consider the FLY code among the most powerful parallel codes for tree N-body simulations. Another important new feature regards the availability of an interface with hydrodynamical Paramesh based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. The idea to build an interface between two codes, that have different and complementary cosmological tasks, allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block

  16. Ventral Pallidal Coding of a Learned Taste Aversion

    PubMed Central

    Itoga, Christy A.; Berridge, Kent C.; Aldridge, J. Wayne

    2016-01-01

    The hedonic value of a sweet food reward, or how much a taste is ‘liked’, has been suggested to be encoded by neuronal firing in the posterior ventral pallidum (VP). Hedonic impact can be altered by psychological manipulations, such as taste aversion conditioning, which can make an initially pleasant sweet taste become perceived as disgusting. Pairing nausea-inducing LiCl injection as a Pavlovian unconditioned stimulus (UCS) with a novel taste that is normally palatable as the predictive conditioned stimulus (CS+) suffices to induce a learned taste aversion that changes orofacial ‘liking’ responses to that sweet taste (e.g., lateral tongue protrusions) to ‘disgust’ reactions (e.g., gapes) in rats. We used two different sweet tastes of similar initial palatability (a sucrose solution and a polycose/saccharin solution, CS± assignment was counterbalanced across groups) to produce a discriminative conditioned aversion. Only one of those tastes (arbitrarily assigned and designated as CS+) was associatively paired with LiCl injections as UCS to form a conditioned aversion. The other taste (CS−) was paired with mere vehicle injections to remain relatively palatable as a control sweet taste. We recorded the neural activity in VP in response to each taste, before and after aversion training. We found that the safe and positively hedonic taste always elicited excitatory increases in firing rate of VP neurons. By contrast, aversion learning reversed the VP response to the ‘disgusting’ CS+ taste from initial excitation into a conditioned decrease in neuronal firing rate after training. Such neuronal coding of hedonic impact by VP circuitry may contribute both to normal pleasure and disgust, and disruptions of VP coding could result in affective disorders, addictions and eating disorders. PMID:26615907

  17. A user's guide to the PLTEMP/ANL code.

    SciTech Connect

    Kalimullah, M.

    2011-07-05

    PLTEMP/ANL V4.1 is a FORTRAN program that obtains a steady-state flow and temperature solution for a nuclear reactor core, or for a single fuel assembly. It is based on an evolutionary sequence of ''PLTEMP'' codes in use at ANL for the past 20 years. Fueled and non-fueled regions are modeled. Each fuel assembly consists of one or more plates or tubes separated by coolant channels. The fuel plates may have one to five layers of different materials, each with heat generation. The width of a fuel plate may be divided into multiple longitudinal stripes, each with its own axial power shape. The temperature solution is effectively 2-dimensional. It begins with a one-dimensional solution across all coolant channels and fuel plates/tubes within a given fuel assembly, at the entrance to the assembly. The temperature solution is repeated for each axial node along the length of the fuel assembly. The geometry may be either slab or radial, corresponding to fuel assemblies made of a series of flat (or slightly curved) plates, or of nested tubes. A variety of thermal-hydraulic correlations are available with which to determine safety margins such as Onset-of-Nucleate boiling (ONB), departure from nucleate boiling (DNB), and onset of flow instability (FI). Coolant properties for either light or heavy water are obtained from FORTRAN functions rather than from tables. The code is intended for thermal-hydraulic analysis of research reactor performance in the sub-cooled boiling regime. Both turbulent and laminar flow regimes can be modeled. Options to calculate both forced flow and natural circulation are available. A general search capability is available (Appendix XII) to greatly reduce the reactor analyst's time.
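The one-dimensional temperature solution across a plate can be caricatured as a series resistance chain from the coolant through the convective film and the plate layers. The sketch below assumes a fixed surface heat flux and ignores distributed heat generation inside each layer, so it is only a crude stand-in for PLTEMP's per-node solution; all names and values are hypothetical.

```python
def plate_temps(t_coolant, q_flux, h, layers):
    """1-D steady conduction: a wall heat flux q_flux (W/m^2) crosses the
    convective film (coefficient h, W/m^2-K) and then each
    (thickness_m, conductivity_W_per_mK) layer in series.
    Returns temperatures from the clad surface inward."""
    temps = [t_coolant + q_flux / h]           # surface temp via Newton cooling
    for thickness, k in layers:
        temps.append(temps[-1] + q_flux * thickness / k)  # conduction drop
    return temps

# Hypothetical plate: 0.4 mm aluminum clad (k=180), 0.5 mm fuel meat (k=50)
T = plate_temps(t_coolant=45.0, q_flux=1.0e6,
                h=2.0e4, layers=[(4e-4, 180.0), (5e-4, 50.0)])
```

With these illustrative numbers the film alone contributes a 50 K rise, which is why margins such as ONB in codes like PLTEMP hinge strongly on the heat-transfer correlation chosen for h.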

  18. A User's Guide to the PLTEMP/ANL Code

    SciTech Connect

    Olson, Arne P.; Kalimullah, M.

    2015-07-07

    PLTEMP/ANL V4.2 is a FORTRAN program that obtains a steady-state flow and temperature solution for a nuclear reactor core, or for a single fuel assembly. It is based on an evolutionary sequence of ''PLTEMP'' codes in use at ANL for the past 20 years. Fueled and non-fueled regions are modeled. Each fuel assembly consists of one or more plates or tubes separated by coolant channels. The fuel plates may have one to five layers of different materials, each with heat generation. The width of a fuel plate may be divided into multiple longitudinal stripes, each with its own axial power shape. The temperature solution is effectively 2-dimensional. It begins with a one-dimensional solution across all coolant channels and fuel plates/tubes within a given fuel assembly, at the entrance to the assembly. The temperature solution is repeated for each axial node along the length of the fuel assembly. The geometry may be either slab or radial, corresponding to fuel assemblies made of a series of flat (or slightly curved) plates, or of nested tubes. A variety of thermal-hydraulic correlations are available with which to determine safety margins such as Onset-of-Nucleate boiling (ONB), departure from nucleate boiling (DNB), and onset of flow instability (FI). Coolant properties for either light or heavy water are obtained from FORTRAN functions rather than from tables. The code is intended for thermal-hydraulic analysis of research reactor performance in the sub-cooled boiling regime. Both turbulent and laminar flow regimes can be modeled. Options to calculate both forced flow and natural circulation are available. A general search capability is available (Appendix XII) to greatly reduce the reactor analyst's time.

  19. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... charge by writing the Mail and Messenger Services, U.S. Nuclear Regulatory Commission, Washington, DC... be applied to OM Code activities. (ii) Motor-Operated Valve testing. Licensees shall comply with the provisions for testing motor-operated valves in OM Code ISTC 4.2, 1995 Edition with the 1996 and 1997...

  20. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., Washington, DC 20555-0001; or by fax to (301) 415-2289; or by email to DISTRIBUTION.RESOURCE@nrc.gov. Copies... be applied to OM Code activities. (ii) Motor-Operated Valve testing. Licensees shall comply with the provisions for testing motor-operated valves in OM Code ISTC 4.2, 1995 Edition with the 1996 and 1997...

  1. A user's manual for the Loaded Microstrip Antenna Code (LMAC)

    NASA Technical Reports Server (NTRS)

    Forrai, D. P.; Newman, E. H.

    1988-01-01

    The use of the Loaded Microstrip Antenna Code is described. The geometry of this antenna is shown and its dimensions are described in terms of the program outputs. The READ statements for the inputs are detailed and typical values are given where applicable. The inputs of four example problems are displayed with the corresponding output of the code given in the appendices.

  2. Code-Switching: A Natural Phenomenon vs Language "Deficiency."

    ERIC Educational Resources Information Center

    Cheng, Li-Rong; Butler, Katharine

    1989-01-01

    Proposes that code switching (CS) and code mixing are natural phenomena that may result in increased competency in various communicative contexts. Both assets and deficits of CS are analyzed, and an ethnographic approach to the variable underlying CS is recommended. (32 references) (Author/VWL)

  3. A hippocampal network for spatial coding during immobility and sleep

    PubMed Central

    Kay, K.; Sosa, M.; Chung, J.E.; Karlsson, M.P.; Larkin, M.C.; Frank, L.M.

    2016-01-01

    How does an animal know where it is when it stops moving? Hippocampal place cells fire at discrete locations as subjects traverse space, thereby providing an explicit neural code for current location during locomotion. In contrast, during awake immobility, the hippocampus is thought to be dominated by neural firing representing past and possible future experience. The question of whether and how the hippocampus constructs a representation of current location in the absence of locomotion has stood unresolved. Here we report that a distinct population of hippocampal neurons, located in the CA2 subregion, signals current location during immobility, and furthermore does so in association with a previously unidentified hippocampus-wide network pattern. In addition, signaling of location persists into brief periods of desynchronization prevalent in slow-wave sleep. The hippocampus thus generates a distinct representation of current location during immobility, pointing to mnemonic processing specific to experience occurring in the absence of locomotion. PMID:26934224

  4. A computer code for beam dynamics simulations in SFRFQ structure

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Chen, J. E.; Lu, Y. R.; Yan, X. Q.; Zhu, K.; Fang, J. X.; Guo, Z. Y.

    2007-03-01

    A computer code (SFRFQCODEv1.0) has been developed to analyze the beam dynamics of the Separated Function Radio Frequency Quadrupole (SFRFQ) structure. Calculations show that transverse and longitudinal stability can be ensured by selecting proper dynamic and structure parameters. This paper describes the beam-dynamical mechanism of the SFRFQ and presents a design example of an SFRFQ cavity, which will be used as a post-accelerator of a 26 MHz, 1 MeV O+ Integrated Split Ring (ISR) RFQ and will accelerate O+ from 1 to 1.5 MeV. Three electrostatic quadrupoles are adopted to realize transverse beam matching from the ISR RFQ to the SFRFQ cavity; this arrangement is also useful for adjusting the beam size.

  5. Developing a code of ethics for human cloning.

    PubMed

    Collmann, J; Graber, G

    2000-01-01

    Under what conditions might the cloning of human beings constitute an ethical practice? A tendency exists to analyze human cloning merely as a technical procedure. As with all revolutionary technological developments, however, human cloning potentially exists in a broad social context that will both shape and be shaped by the biological techniques. Although human cloning must be subjected to technical analysis that addresses fundamental ethical questions such as its safety and efficacy, questions exist that focus our attention on broader issues. Asserting that cloning inevitably leads to undesirable consequences commits the fallacy of technological determinism and untenably separates technological and ethical evaluation. Drawing from the Report of the National Bioethics Advisory Commission and Aldous Huxley's Brave New World, we offer a draft "Code of Ethics for Human Cloning" in order to stimulate discussion about the ethics of the broader ramifications of human cloning as well as its particular technological properties.

  6. Error threshold for the surface code in a superohmic environment

    NASA Astrophysics Data System (ADS)

    Lopez-Delgado, Daniel A.; Novais, E.; Mucciolo, Eduardo R.; Caldeira, Amir O.

    Using the Keldysh formalism, we study the fidelity of a quantum memory over multiple quantum error correction cycles when the physical qubits interact with a bosonic bath at zero temperature. For encoding, we employ the surface code, which has one of the highest error thresholds in the case of stochastic and uncorrelated errors. The time evolution of the fidelity of the resulting two-dimensional system is cast into a statistical mechanics phase transition problem on a three-dimensional spin lattice, and the error threshold is determined by the critical temperature of the spin model. For superohmic baths, we find that time does not affect the error threshold: its value is the same for one or an arbitrary number of quantum error correction cycles. Financial support: FAPESP and CNPq (Brazil).

  7. A Software Safety Certification Plug-in for Automated Code Generators (Executive Briefing)

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Schumann, Johann; Greaves, Doug

    2006-01-01

    A viewgraph presentation describing a certification tool to check the safety of auto-generated codes is shown. The topics include: 1) Auto-generated Code at NASA; 2) Safety of Auto-generated Code; 3) Technical Approach; and 4) Project Plan.

  8. Regulations and Ethical Considerations for Astronomy Education Research III: A Suggested Code of Ethics

    ERIC Educational Resources Information Center

    Brogt, Erik; Foster, Tom; Dokter, Erin; Buxner, Sanlyn; Antonellis, Jessie

    2009-01-01

    We present an argument for, and suggested implementation of, a code of ethics for the astronomy education research community. This code of ethics is based on legal and ethical considerations set forth by U.S. federal regulations and the existing code of conduct of the American Educational Research Association. We also provide a fictitious research…

  9. Sharing code

    PubMed Central

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing. PMID:25165519

  10. RNAcentral: A comprehensive database of non-coding RNA sequences

    DOE PAGES

    Williams, Kelly Porter; Lau, Britney Yan

    2016-10-28

    RNAcentral is a database of non-coding RNA (ncRNA) sequences that aggregates data from specialised ncRNA resources and provides a single entry point for accessing ncRNA sequences of all ncRNA types from all organisms. Since its launch in 2014, RNAcentral has integrated twelve new resources, taking the total number of collaborating databases to 22, and has begun importing new types of data, such as modified nucleotides from MODOMICS and PDB. We created new species-specific identifiers that refer to unique RNA sequences within the context of a single species. Furthermore, the website has been subject to continuous improvements focusing on text and sequence similarity searches as well as genome browsing functionality.

  11. A model code for the radiative theta pinch

    SciTech Connect

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-15

    A model for the theta pinch is presented with three modelled phases: a radial inward shock phase, a reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation, with radiation-coupled dynamics in the pinch phase. A code is written incorporating corrections for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated: the coupling coefficient f between the primary loop current and the induced plasma current, and the mass swept-up factor f{sub m}. These values are taken from experiments carried out on the Chulalongkorn theta pinch.

  12. Visualization of elastic wavefields computed with a finite difference code

    SciTech Connect

    Larsen, S.; Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent; the waves propagate to the edge of the continent and then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
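
    The stencil-update pattern behind such finite-difference propagation codes can be sketched in a few lines. The sketch below is a minimal 1D scalar wave equation with a second-order leapfrog time step; it is an illustrative assumption, not the authors' 2D elastic code, which adds more field variables and coupling terms but follows the same pattern.

    ```python
    # Minimal sketch of the finite-difference idea: advance u_tt = c^2 u_xx
    # on a uniform grid with a centered (leapfrog) time step and fixed
    # (u = 0) boundaries. Grid size, step count, and Courant number are
    # illustrative choices, not values from the paper.

    def propagate(n=201, steps=300, courant=0.5):
        """Leapfrog time-stepping of the 1D scalar wave equation."""
        u = [0.0] * n
        u[n // 2] = 1.0           # localized initial disturbance
        u_prev = list(u)          # zero initial velocity
        c2 = courant ** 2         # (c*dt/dx)^2, kept < 1 for stability (CFL)
        for _ in range(steps):
            u_next = [0.0] * n    # endpoints stay 0 (rigid boundaries)
            for i in range(1, n - 1):
                u_next[i] = (2.0 * u[i] - u_prev[i]
                             + c2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
            u_prev, u = u, u_next
        return u

    wave = propagate()
    ```

    The X Windows snapshot viewer described above would simply render `u` after each time step instead of only returning the final state.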

  13. RNAcentral: a comprehensive database of non-coding RNA sequences

    PubMed Central

    2017-01-01

    RNAcentral is a database of non-coding RNA (ncRNA) sequences that aggregates data from specialised ncRNA resources and provides a single entry point for accessing ncRNA sequences of all ncRNA types from all organisms. Since its launch in 2014, RNAcentral has integrated twelve new resources, taking the total number of collaborating databases to 22, and has begun importing new types of data, such as modified nucleotides from MODOMICS and PDB. We created new species-specific identifiers that refer to unique RNA sequences within the context of a single species. The website has been subject to continuous improvements focusing on text and sequence similarity searches as well as genome browsing functionality. All RNAcentral data is provided for free and is available for browsing, bulk downloads, and programmatic access at http://rnacentral.org/. PMID:27794554

  14. Heparan sulfate proteoglycans: a sugar code for vertebrate development?

    PubMed Central

    Poulain, Fabienne E.; Yost, H. Joseph

    2015-01-01

    Heparan sulfate proteoglycans (HSPGs) have long been implicated in a wide range of cell-cell signaling and cell-matrix interactions, both in vitro and in vivo in invertebrate models. Although many of the genes that encode HSPG core proteins and the biosynthetic enzymes that generate and modify HSPG sugar chains have not yet been analyzed by genetics in vertebrates, recent studies have shown that HSPGs do indeed mediate a wide range of functions in early vertebrate development, for example during left-right patterning and in cardiovascular and neural development. Here, we provide a comprehensive overview of the various roles of HSPGs in these systems and explore the concept of an instructive heparan sulfate sugar code for modulating vertebrate development. PMID:26487777

  15. Heparan sulfate proteoglycans: a sugar code for vertebrate development?

    PubMed

    Poulain, Fabienne E; Yost, H Joseph

    2015-10-15

    Heparan sulfate proteoglycans (HSPGs) have long been implicated in a wide range of cell-cell signaling and cell-matrix interactions, both in vitro and in vivo in invertebrate models. Although many of the genes that encode HSPG core proteins and the biosynthetic enzymes that generate and modify HSPG sugar chains have not yet been analyzed by genetics in vertebrates, recent studies have shown that HSPGs do indeed mediate a wide range of functions in early vertebrate development, for example during left-right patterning and in cardiovascular and neural development. Here, we provide a comprehensive overview of the various roles of HSPGs in these systems and explore the concept of an instructive heparan sulfate sugar code for modulating vertebrate development.

  16. National Combustion Code: A Multidisciplinary Combustor Design System

    NASA Technical Reports Server (NTRS)

    Stubbs, Robert M.; Liu, Nan-Suey

    1997-01-01

    The Internal Fluid Mechanics Division conducts both basic research and technology, and system technology research for aerospace propulsion systems components. The research within the division, which is both computational and experimental, is aimed at improving fundamental understanding of flow physics in inlets, ducts, nozzles, turbomachinery, and combustors. This article and the following three articles highlight some of the work accomplished in 1996. A multidisciplinary combustor design system is critical for optimizing the combustor design process. Such a system should include sophisticated computer-aided design (CAD) tools for geometry creation, advanced mesh generators for creating solid model representations, a common framework for fluid flow and structural analyses, modern postprocessing tools, and parallel processing. The goal of the present effort is to develop some of the enabling technologies and to demonstrate their overall performance in an integrated system called the National Combustion Code.

  17. A Watermarking Scheme for High Efficiency Video Coding (HEVC)

    PubMed Central

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high-payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Considering that HEVC will likely be used in a variety of applications in the future, the proposed algorithm has high potential for use in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate. PMID:25144455
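
    The general embed-in-QTC idea can be illustrated outside the codec. The toy below hides message bits in the least-significant bits of quantized coefficient magnitudes; the coefficient values, the LSB rule, and the eligibility threshold are all illustrative assumptions, not the paper's actual scheme, which operates inside the HEVC encoding loop.

    ```python
    # Toy sketch: hide watermark bits in the LSBs of quantized transform
    # coefficients (QTCs) whose magnitude exceeds 1, so no coefficient is
    # ever zeroed out. Coefficients here are made-up numbers.

    def embed(qtcs, bits):
        out, i = list(qtcs), 0
        for k, c in enumerate(out):
            if abs(c) > 1 and i < len(bits):
                sign = -1 if c < 0 else 1
                mag = (abs(c) & ~1) | bits[i]   # overwrite LSB of magnitude
                out[k] = sign * mag
                i += 1
        return out

    def extract(qtcs, nbits):
        # read back the LSBs of the same eligible coefficients
        return [abs(c) & 1 for c in qtcs if abs(c) > 1][:nbits]

    coeffs = [12, -7, 0, 3, 0, -2, 9, 0, 1]
    message = [1, 0, 1, 1, 0]
    marked = embed(coeffs, message)
    recovered = extract(marked, len(message))
    ```

    Because each embed changes a coefficient by at most 1 quantization step, distortion stays small, which mirrors the paper's observation that video quality and bitrate are barely affected.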

  18. RNAcentral: A comprehensive database of non-coding RNA sequences

    SciTech Connect

    Williams, Kelly Porter; Lau, Britney Yan

    2016-10-28

    RNAcentral is a database of non-coding RNA (ncRNA) sequences that aggregates data from specialised ncRNA resources and provides a single entry point for accessing ncRNA sequences of all ncRNA types from all organisms. Since its launch in 2014, RNAcentral has integrated twelve new resources, taking the total number of collaborating databases to 22, and has begun importing new types of data, such as modified nucleotides from MODOMICS and PDB. We created new species-specific identifiers that refer to unique RNA sequences within the context of a single species. Furthermore, the website has been subject to continuous improvements focusing on text and sequence similarity searches as well as genome browsing functionality.

  19. Helium trimer calculations with a public quantum three-body code

    SciTech Connect

    Kolganova, E. A.; Roudnev, V.; Cavagnero, M.

    2012-10-15

    We present an illustration of using a quantum three-body code being prepared for public release. The code is based on iteratively solving the three-dimensional Faddeev equations. The code is easy to use and allows users to perform highly accurate calculations of quantum three-body systems. The previously known results for the He{sub 3} ground state are well reproduced by the code.

  20. A cascaded error control coding scheme for space and satellite communication

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    An error control coding scheme for space and satellite communications is presented. The scheme is attained by cascading two codes, the inner and outer codes. Error performance of the scheme is analyzed. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be achieved even for a high channel bit-error rate. Several example schemes are studied. One of the example schemes is proposed to NASA for satellite or spacecraft downlink error control.
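
    The cascading idea can be sketched with deliberately tiny stand-in codes: the outer code's output is re-encoded by an inner code, so the inner decoder cleans up most channel errors before the outer decoder runs. Here the inner code is a (3,1) repetition code with majority-vote decoding and the outer code is a single even-parity check; these are illustrative assumptions, far weaker than the block and convolutional codes in the actual scheme.

    ```python
    # Sketch of a cascaded (concatenated) coding scheme: outer parity code
    # wrapped by an inner (3,1) repetition code.

    def outer_encode(bits):
        return bits + [sum(bits) % 2]               # append even parity

    def inner_encode(bits):
        return [b for b in bits for _ in range(3)]  # repeat each bit 3x

    def inner_decode(chan):
        # majority vote over each group of 3 received bits
        return [int(sum(chan[i:i + 3]) >= 2) for i in range(0, len(chan), 3)]

    def outer_check(bits):
        return sum(bits) % 2 == 0                   # parity still consistent?

    data = [1, 0, 1, 1]
    tx = inner_encode(outer_encode(data))
    rx = list(tx)
    rx[4] ^= 1                 # a single channel bit flip
    decoded = inner_decode(rx) # inner decoder corrects it before the outer stage
    ```

    The single flip is absorbed by the inner majority vote, so the outer parity check passes and the data bits come through intact, which is the mechanism that lets a properly chosen cascade reach very high reliability at high channel bit-error rates.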

  1. Performance Analysis of a CDMA VSAT System With Convolutional and Reed-Solomon Coding

    DTIC Science & Technology

    2002-09-01

    Forward Error Correction (FEC), Walsh codes, and PN sequences are used to generate a CDMA system, and FEC is used to further improve the performance. Convolutional and block coding methods are examined and results are obtained for each case, including concatenated use of the codes. The performance of the system is given in terms of Bit Error Rate (BER). As observed from the results, the performance is mainly affected by the number of users and the code
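
    The Walsh codes underpinning such a CDMA system come from the Sylvester-Hadamard construction, and their mutual orthogonality is what lets users share the channel. A minimal sketch, with illustrative user data, assuming nothing from the report beyond the standard construction:

    ```python
    # Generate Walsh codes via the Sylvester-Hadamard recursion and show
    # that two users spread with distinct codes can be separated exactly
    # on a noiseless channel.

    def walsh(order):
        """Return a 2^order x 2^order Sylvester-Hadamard matrix (+1/-1 rows)."""
        h = [[1]]
        for _ in range(order):
            h = ([row + row for row in h]
                 + [row + [-x for x in row] for row in h])
        return h

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    codes = walsh(3)                       # eight length-8 Walsh codes
    # Two users transmit one data symbol each (+1 and -1), spread and summed:
    tx = [c1 * 1 + c2 * (-1) for c1, c2 in zip(codes[1], codes[2])]
    user1 = dot(tx, codes[1]) / 8          # despread with user 1's code
    user2 = dot(tx, codes[2]) / 8          # despread with user 2's code
    ```

    Cross-correlation between distinct Walsh codes is exactly zero, so each despread recovers only its own user's symbol; the BER degradation the report measures comes from noise, asynchrony, and loading, not from the codes themselves.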

  2. ARCHY (Analysis and Reverse Engineering of Code Using Hierarchy and Yourdon): A tool for Fortran code maintenance and development

    SciTech Connect

    Aull, J.E.

    1990-10-01

    Analysis and Reverse Engineering of Code Using Hierarchy and Yourdon (ARCHY) diagrams is a tool for development and maintenance of FORTRAN programs. When FORTRAN source code is read by ARCHY, it automatically creates a database that includes a data dictionary, which lists each variable, its dimensions, type, category (set, referenced, passed), module calling structure, and common block information. The database exists in an ASCII file that can be directly edited or maintained with the ARCHY database editor. The database is used by ARCHY to produce structure charts and Yourdon data flow diagrams in PostScript format. ARCHY also transfers database information such as variable definitions, module descriptions, and technical references to and from module headers. ARCHY contains several utilities for making programs more readable. It can automatically indent the body of loops and conditionals and resequence statement labels. Various language extensions are translated into FORTRAN-77 to increase code portability. ARCHY frames comment statements and groups FORMAT statements at the end of modules. It can alphabetize modules within a program, add end-of-line labels, and change executable statements to upper or lower case. ARCHY runs under the VAX-VMS operating system and accepts VAX FORTRAN, IBM FORTRAN, and Cray FORTRAN source files.
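
    The data-dictionary step can be sketched in miniature. The fragment below scans FORTRAN-77-style declaration lines and records each variable's type and dimensions; it is a hedged toy with made-up source, not ARCHY's parser, which also tracks usage categories, call structure, and COMMON blocks.

    ```python
    # Toy data-dictionary builder: match simple FORTRAN declaration lines
    # and record variable name, type, and (optional) dimension text.
    # Handles only single-dimension, comma-separated declarations.

    import re

    DECL = re.compile(
        r"^\s*(INTEGER|REAL|DOUBLE PRECISION|LOGICAL|CHARACTER)\s+(.+)$", re.I)

    def data_dictionary(source):
        entries = {}
        for line in source.splitlines():
            m = DECL.match(line)
            if not m:
                continue
            ftype = m.group(1).upper()
            for item in m.group(2).split(","):
                name, _, dims = item.strip().partition("(")
                entries[name.strip()] = {
                    "type": ftype,
                    "dims": dims.rstrip(")") if dims else None,
                }
        return entries

    source = """      REAL X, Y(10)
          INTEGER COUNT
    """
    dd = data_dictionary(source)
    ```

    A real tool would also classify each variable as set, referenced, or passed by walking executable statements, which is where most of ARCHY's database value comes from.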

  3. A Secure RFID Authentication Protocol Adopting Error Correction Code

    PubMed Central

    Zheng, Xinying; Chen, Pei-Yu

    2014-01-01

    RFID technology has become popular in many applications; however, most RFID products lack security-related functionality due to the hardware limitations of low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting an error correction code for RFID. We also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol shows that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks, and replay attacks. We also compare our protocol with previous works in terms of performance. PMID:24959619
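
    The error-correction primitive such a protocol relies on can be shown with the classic Hamming(7,4) code, which corrects any single flipped bit. This is a generic illustration under the assumption of a single-error-correcting block code; the paper's actual code choice and protocol messages are not reproduced here.

    ```python
    # Hamming(7,4): 4 data bits -> 7-bit codeword with parity bits at
    # positions 1, 2, 4 (1-based). The syndrome equals the 1-based
    # position of a single bit error, which can then be flipped back.

    def hamming74_encode(d):
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # checks positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4          # checks positions 2,3,6,7
        p4 = d2 ^ d3 ^ d4          # checks positions 4,5,6,7
        return [p1, p2, d1, p4, d2, d3, d4]

    def hamming74_correct(c):
        c = list(c)
        s = (1 * (c[0] ^ c[2] ^ c[4] ^ c[6])
             + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6])
             + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6]))
        if s:                       # nonzero syndrome = error position
            c[s - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]   # recover the data bits

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                    # one bit corrupted in transit
    decoded = hamming74_correct(word)
    ```

    In the protocol setting, this kind of correction lets a reader tolerate noisy tag responses without extra round trips, at minimal hardware cost on the tag.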

  4. A robust low-rate coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)

    1991-01-01

    Due to the rapidly evolving fields of image processing and networking, video information promises to be an important part of telecommunication systems. Although up to now video transmission has been transported mainly over circuit-switched networks, it is likely that packet-switched networks will dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband ISDN can provide a flexible, independent and high-performance environment for video communication. For this paper, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression rate with good visual performance, robustness to packet loss, tractable integration with network mechanics, and simplicity in parallel implementation.

  5. A benchmark study for glacial isostatic adjustment codes

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2011-04-01

    The study of glacial isostatic adjustment (GIA) is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to loading is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements (e.g. GRACE and GOCE) to the projections of future sea level trends in response to climate change. Modern modelling approaches to GIA are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated; a community benchmark data set would clearly be valuable. Following the example of the mantle convection community, here we present, for the first time, the results of a benchmark study of codes designed to model GIA. This has taken place within a collaboration facilitated through European Cooperation in Science and Technology (COST) Action ES0701. The approaches benchmarked are based on significantly different codes and different techniques. The test computations are based on models with spherical symmetry and Maxwell rheology and include inputs from different methods and solution techniques: viscoelastic normal modes, spectral-finite elements and finite elements. The tests involve the loading and tidal Love numbers and their relaxation spectra, the deformation and gravity variations driven by surface loads characterized by simple geometry and time history and the rotational fluctuations in response to glacial unloading. In spite of the significant differences in the numerical methods employed, the test computations show a satisfactory agreement between the results provided by the participants.

  6. Analysis of a two-dimensional type 6 shock-interference pattern using a perfect-gas code and a real-gas code

    NASA Technical Reports Server (NTRS)

    Bertin, J. J.; Graumann, B. W.

    1973-01-01

    Numerical codes were developed to calculate the two-dimensional flow field which results when supersonic flow encounters double-wedge configurations whose angles are such that a type 4 pattern occurs. The flow-field model included the shock interaction phenomena for a delta-wing orbiter. Two numerical codes were developed: one which used the perfect-gas relations and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.

  7. MUXS: a code to generate multigroup cross sections for sputtering calculations

    SciTech Connect

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1982-10-01

    This report documents MUXS, a computer code to generate multigroup cross sections for charged particle transport problems. Cross sections generated by MUXS can be used in many multigroup transport codes, with minor modifications to these codes, to calculate sputtering yields, reflection coefficients, penetration distances, etc.

  8. Code-Switching in English as a Foreign Language Classroom: Teachers' Attitudes

    ERIC Educational Resources Information Center

    Ibrahim, Engku Haliza Engku; Shah, Mohamed Ismail Ahamad; Armia, Najwa Tgk.

    2013-01-01

    Code-switching has always been an intriguing phenomenon to sociolinguists. While the general attitude to it seems negative, people seem to code-switch quite frequently. Teachers of English as a foreign language too frequently claim that they do not like to code-switch in the language classroom for various reasons--many are of the opinion that only…

  9. 76 FR 57795 - Agency Request for Renewal of a Previously Approved Collection; Disclosure of Code Sharing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-16

    ... Code Sharing Arrangements and Long-Term Wet Leases AGENCY: Office of the Secretary. ACTION: Notice and... 20590. SUPPLEMENTARY INFORMATION: OMB Control Number: 2105-0537. Title: Disclosure of Code Sharing... between cooperating carriers, at least one of the airline designator codes used on a flight is...

  10. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    ERIC Educational Resources Information Center

    Bobka, Marilyn E.; Subramaniam, J.B.

    The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…

  11. Stability codes for a liquid rocket implemented for use on a PC

    NASA Astrophysics Data System (ADS)

    Armstrong, Wilbur; Doane, George C., III; Dean, Garvin

    1992-06-01

    The high-frequency code has been made interactive using FORTRAN 5.0. The option to plot n-tau curves was added using the graphics routines of FORTRAN 5.0 and GRAFMATIC. The user is now able to run with input values either non-dimensional (as in the original code) or dimensional. Input data may be modified from the keyboard. The low- and intermediate-frequency codes have been run through a set of variations; this will help the user understand how the stability of a configuration changes if any of the input data change.

  12. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
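
    What a "sensitivity coefficient with respect to a rate coefficient parameter" means can be checked on the one case with a closed form. For the single first-order reaction A → products, A(t) = A0·exp(−kt), so dA/dk = −t·A0·exp(−kt); the sketch below compares that analytic value against a central finite difference in k. The reaction and all numbers are illustrative assumptions; LSENS itself integrates the sensitivity equations alongside the kinetics ODEs rather than differencing.

    ```python
    # Sensitivity of a first-order decay A(t) = A0 * exp(-k t) with
    # respect to the rate coefficient k, two ways: closed form vs a
    # central finite difference.

    import math

    def concentration(a0, k, t):
        return a0 * math.exp(-k * t)

    def sensitivity_fd(a0, k, t, dk=1e-6):
        # central difference in k, O(dk^2) accurate
        return (concentration(a0, k + dk, t)
                - concentration(a0, k - dk, t)) / (2 * dk)

    a0, k, t = 2.0, 0.8, 1.5
    analytic = -t * a0 * math.exp(-k * t)   # dA/dk in closed form
    numeric = sensitivity_fd(a0, k, t)
    ```

    For realistic mechanisms with dozens of coupled reactions no closed form exists, which is why a solver-integrated sensitivity method like the one in LSENS is preferred over repeated perturbed runs.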

  13. A simple model of optimal population coding for sensory systems.

    PubMed

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  14. Molecular reconstruction of a fungal genetic code alteration.

    PubMed

    Mateus, Denisa D; Paredes, João A; Español, Yaiza; Ribas de Pouplana, Lluís; Moura, Gabriela R; Santos, Manuel A S

    2013-06-01

    Fungi of the CTG clade translate the Leu CUG codon as Ser. This genetic code alteration is the only eukaryotic sense-to-sense codon reassignment known to date, is mediated by an ambiguous serine tRNA (tRNACAG(Ser)), exposes unanticipated flexibility of the genetic code and raises major questions about its selection and fixation in this fungal lineage. In particular, the origin of the tRNACAG(Ser) and the evolutionary mechanism of CUG reassignment from Leu to Ser remain poorly understood. In this study, we have traced the origin of the tDNACAG(Ser) gene and studied critical mutations in the tRNACAG(Ser) anticodon-loop that modulated CUG reassignment. Our data show that the tRNACAG(Ser) emerged from insertion of an adenosine in the middle position of the 5'-CGA-3'anticodon of a tRNACGA(Ser) ancestor, producing the 5'-CAG-3' anticodon of the tRNACAG(Ser), without altering its aminoacylation properties. This mutation initiated CUG reassignment while two additional mutations in the anticodon-loop resolved a structural conflict produced by incorporation of the Leu 5'-CAG-3'anticodon in the anticodon-arm of a tRNA(Ser). Expression of the mutant tRNACAG(Ser) in yeast showed that it cannot be expressed at physiological levels and we postulate that such downregulation was essential to maintain Ser misincorporation at sub-lethal levels during the initial stages of CUG reassignment. We demonstrate here that such low level CUG ambiguity is advantageous in specific ecological niches and we propose that misreading tRNAs are targeted for degradation by an unidentified tRNA quality control pathway.
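
    The anticodon-to-codon relationship described above is easy to verify: an anticodon read 5'→3' pairs antiparallel with its codon, so the decoded codon is the anticodon's reverse complement. A short sketch (the single string assignments are illustrative; the biological insertion actually produces an enlarged anticodon loop from which 5'-CAG-3' is read):

    ```python
    # Decode the codon read by a tRNA anticodon via RNA reverse complement.

    RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def decoded_codon(anticodon):
        """Codon (5'->3') recognized by an anticodon given 5'->3'."""
        return "".join(RNA_COMPLEMENT[b] for b in reversed(anticodon))

    ancestor = "CGA"   # anticodon of the ancestral tRNA-Ser
    mutant = "CAG"     # anticodon after the mid-anticodon adenosine insertion

    old_codon = decoded_codon(ancestor)   # UCG, a canonical Ser codon
    new_codon = decoded_codon(mutant)     # CUG, canonically a Leu codon
    ```

    The ancestral anticodon reads UCG (Ser) while the mutant reads CUG (canonically Leu), which is exactly the sense-to-sense reassignment the study traces: a serine tRNA acquires a leucine codon without changing its aminoacylation identity.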

  15. Molecular reconstruction of a fungal genetic code alteration

    PubMed Central

    Mateus, Denisa D.; Paredes, João A.; Español, Yaiza; Ribas de Pouplana, Lluís; Moura, Gabriela R.; Santos, Manuel A.S.

    2013-01-01

    Fungi of the CTG clade translate the Leu CUG codon as Ser. This genetic code alteration is the only eukaryotic sense-to-sense codon reassignment known to date, is mediated by an ambiguous serine tRNA (tRNACAGSer), exposes unanticipated flexibility of the genetic code and raises major questions about its selection and fixation in this fungal lineage. In particular, the origin of the tRNACAGSer and the evolutionary mechanism of CUG reassignment from Leu to Ser remain poorly understood. In this study, we have traced the origin of the tDNACAGSer gene and studied critical mutations in the tRNACAGSer anticodon-loop that modulated CUG reassignment. Our data show that the tRNACAGSer emerged from insertion of an adenosine in the middle position of the 5′-CGA-3′anticodon of a tRNACGASer ancestor, producing the 5′-CAG-3′ anticodon of the tRNACAGSer, without altering its aminoacylation properties. This mutation initiated CUG reassignment while two additional mutations in the anticodon-loop resolved a structural conflict produced by incorporation of the Leu 5′-CAG-3′anticodon in the anticodon-arm of a tRNASer. Expression of the mutant tRNACAGSer in yeast showed that it cannot be expressed at physiological levels and we postulate that such downregulation was essential to maintain Ser misincorporation at sub-lethal levels during the initial stages of CUG reassignment. We demonstrate here that such low level CUG ambiguity is advantageous in specific ecological niches and we propose that misreading tRNAs are targeted for degradation by an unidentified tRNA quality control pathway. PMID:23619021

  16. Assessment of MARMOT. A Mesoscale Fuel Performance Code

    SciTech Connect

    Tonks, M. R.; Schwen, D.; Zhang, Y.; Chakraborty, P.; Bai, X.; Fromm, B.; Yu, J.; Teague, M. C.; Andersson, D. A.

    2015-04-01

    MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high-level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted, and new mesoscale data must be obtained in order to complete it.

  17. A geometric view on early and middle level visual coding.

    PubMed

    Barth, E

    2000-01-01

    As opposed to dealing with the geometry of objects in the 3D world, this paper considers the geometry of the visual input itself, i.e. the geometry of the spatio-temporal hypersurface defined by image intensity as a function of two spatial coordinates and time. The results show how the Riemann curvature tensor of this hypersurface represents speed and direction of motion, and thereby makes it possible to predict global motion percepts and properties of MT neurons. It is argued that important aspects of early and middle level visual coding may be understood as resulting from basic geometric processing of the spatio-temporal visual input. Finally, applications show that the approach can improve the computation of motion.

  18. A large scale code resolution service network in the Internet of Things.

    PubMed

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT’s advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into the proposed resolution service network meets the stated requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  19. A Large Scale Code Resolution Service Network in the Internet of Things

    PubMed Central

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into the proposed resolution service network meets the stated requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  20. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
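
    The folding step described above is, in essence, a sum over coupling-surface bins of forward fluence times adjoint dose importance. A minimal sketch with hypothetical bin values (not MASH's actual data structures):

```python
# Hedged sketch of the "folding" of fluence with dose importance: the
# detector response is the bin-by-bin product of the forward fluence on
# the coupling surface and the adjoint importance of that fluence.
# The three energy bins and their values below are illustrative only.

fluence = [2.0e8, 5.0e7, 1.0e7]        # n/cm^2 per source particle, by energy bin
importance = [1.5e-9, 4.0e-9, 9.0e-9]  # detector response per unit fluence, per bin

# Fold: multiply matching bins and sum to get the desired dose response.
dose = sum(f * i for f, i in zip(fluence, importance))
print(dose)  # dose response per source particle
```

    In the full code system the sum also runs over angle and surface position, and can be repeated for each geometry orientation and source distance.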

  1. Design of zero reference codes by means of a global optimization method

    NASA Astrophysics Data System (ADS)

    Saez Landete, José; Alonso, José; Bernabeu, Eusebio

    2005-01-01

    Grating measurement systems can be used for displacement and angle measurements. They require zero reference codes to obtain zero reference signals and absolute measurements. The zero reference signals are obtained from the autocorrelation of two identical zero reference codes. The design of codes which generate optimum signals is rather complex, especially for large codes. In this paper we present a global optimization method, a DIRECT algorithm, for the design of zero reference codes. This method proves to be a powerful tool for solving this inverse problem.

  2. Design of zero reference codes by means of a global optimization method.

    PubMed

    Saez-Landete, José; Alonso, José; Bernabeu, Eusebio

    2005-01-10

    Grating measurement systems can be used for displacement and angle measurements. They require zero reference codes to obtain zero reference signals and absolute measurements. The zero reference signals are obtained from the autocorrelation of two identical zero reference codes. The design of codes which generate optimum signals is rather complex, especially for large codes. In this paper we present a global optimization method, a DIRECT algorithm, for the design of zero reference codes. This method proves to be a powerful tool for solving this inverse problem.
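
    The design problem above rests on the autocorrelation of a binary code: the central peak provides the zero reference signal, and a good code keeps the secondary peaks small. A minimal sketch of evaluating a (hypothetical) code this way, separate from the DIRECT optimization itself:

```python
# Minimal sketch (not the authors' DIRECT method): the zero reference
# signal as the autocorrelation of a binary code, plus a simple quality
# score, the largest secondary peak relative to the central maximum.

def autocorrelation(code):
    """Aperiodic autocorrelation of a 0/1 code as two identical copies
    slide past each other; index len(code)-1 is the zero-shift peak."""
    n = len(code)
    signal = []
    for shift in range(-(n - 1), n):
        s = sum(code[i] * code[i - shift]
                for i in range(max(0, shift), min(n, n + shift)))
        signal.append(s)
    return signal

def secondary_peak(code):
    """Largest autocorrelation value away from the central peak; good
    zero reference codes keep this small."""
    sig = autocorrelation(code)
    center = len(sig) // 2
    return max(v for i, v in enumerate(sig) if i != center)

code = [1, 1, 0, 1, 0, 0, 1, 0]  # a hypothetical 8-slit code
sig = autocorrelation(code)
print(sig[len(sig) // 2], secondary_peak(code))  # main peak = number of ones
```

    An optimizer such as DIRECT would search over code layouts to minimize `secondary_peak` for a fixed number of transparent slits.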

  3. Is it Code Imperfection or 'garbage in Garbage Out'? Outline of Experiences from a Comprehensive Adr Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The ADR equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice to solve these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the context of computational PDEs, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of an algorithm and a coding error. Therefore, it is well known that code verification remains an art, in which innovative methods and case-based tricks are very common. This study presents full verification of a general transport code. To that end, a complete test suite is designed to probe the ADR solver comprehensively and discover all possible imperfections. We convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, such that tests start simple and build to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capabilities in handling stiff problems, nonnegative concentration, shape preservation, and
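
    One of the verification metrics listed above, convergence order, can be illustrated generically (this is a textbook-style check, not the authors' test suite): solve a problem with a known solution at two resolutions and compare the observed order of the error against the scheme's formal order.

```python
import math

# Generic illustration of an observed-convergence-order check: the
# 3-point central difference for d2u/dx2 is formally second order, so
# halving the grid spacing should cut the error by about a factor of 4.

def laplacian_error(n):
    """Max error of the central second derivative of sin(x) on [0, pi]
    with n interior points (the exact second derivative is -sin(x))."""
    h = math.pi / (n + 1)
    err = 0.0
    for i in range(1, n + 1):
        x = i * h
        approx = (math.sin(x - h) - 2 * math.sin(x) + math.sin(x + h)) / h**2
        err = max(err, abs(approx - (-math.sin(x))))
    return err

e1, e2 = laplacian_error(32), laplacian_error(64)
order = math.log(e1 / e2) / math.log((64 + 1) / (32 + 1))
print(round(order, 2))  # observed order should be close to the formal order 2
```

    A significant gap between the observed and formal order is exactly the kind of signal that distinguishes a discretization or implementation error from an inherent limitation of the algorithm.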

  4. The Evolution of a Coding Schema in a Paced Program of Research

    ERIC Educational Resources Information Center

    Winters, Charlene A.; Cudney, Shirley; Sullivan, Therese

    2010-01-01

    A major task involved in the management, analysis, and integration of qualitative data is the development of a coding schema to facilitate the analytic process. Described in this paper is the evolution of a coding schema that was used in the analysis of qualitative data generated from online forums of middle-aged women with chronic conditions who…

  5. Assessment of Codes and Standards Applicable to a Hydrogen Production Plant Coupled to a Nuclear Reactor

    SciTech Connect

    M. J. Russell

    2006-06-01

    This is an assessment of codes and standards applicable to a hydrogen production plant to be coupled to a nuclear reactor. The result of the assessment is a list of codes and standards that are expected to be applicable to the plant during its design and construction.

  6. Diagnostic Coding for Epilepsy.

    PubMed

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  7. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  8. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.

  9. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  10. TACI: a code for interactive analysis of neutron data produced by a tissue equivalent proportional counter

    SciTech Connect

    Cummings, F.M.

    1984-06-01

    The TEPC analysis code (TACI) is a computer program designed to analyze pulse height data generated by a tissue equivalent proportional counter (TEPC). It is written in HP BASIC and is for use on an HP-87XM personal computer. The theory of TEPC analysis upon which this code is based is summarized.

  11. Honor Codes and Other Contextual Influences on Academic Integrity: A Replication and Extension to Modified Honor Code Settings.

    ERIC Educational Resources Information Center

    McCabe, Donald L.; Trevino, Linda Klebe; Butterfield, Kenneth D.

    2002-01-01

    Investigated the influence of modified honor codes, an alternative to traditional codes that is gaining popularity on larger campuses. Also tested the model of student academic dishonesty previously suggested by McCabe and Trevino. Found that modified honor codes are associated with lower levels of student dishonesty and that the McCabe Trevino…

  12. Building energy codes as a tool for energy efficiency: Examining implementation in Kentucky

    NASA Astrophysics Data System (ADS)

    Zwicker, Brittany L.

    2011-12-01

    Kentucky adopted the 2009 IECC residential energy code in 2011 and is developing a plan for achieving 90 percent compliance with the code. This report examines recommendations for energy code implementation from various expert sources and then compares them to Kentucky's current and planned future procedures for energy code adoption, implementation, and enforcement. It seeks to answer the question: To what extent is Kentucky following expert recommendations as it moves toward adopting and planning for implementation and enforcement of the IECC 2009? The report concludes with recommendations to the Kentucky Board of Housing, Buildings, and Construction for increasing residential energy code compliance and suggestions for exploring increased utility investments in energy efficiency.

  13. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Kasami, Tadao; Fujiwara, Toru; Takata, Toyoo; Lin, Shu

    1988-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. Its error performance is analyzed for a binary symmetric channel with bit-error rate epsilon less than 1/2. It is shown that, if the inner and outer codes are chosen properly, high reliability can be attained even for a high channel bit-error rate. Specific examples with inner codes ranging from high rates to low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities evaluated. They all provide high reliability even for high bit-error rates, say 0.1-0.01. Several example schemes are being considered for satellite and spacecraft downlink error control.

  14. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon <1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
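
    The cascading idea in the two abstracts above can be sketched with a toy instance of my own choosing, a (7,4) Hamming outer code cascaded with a 3-fold repetition inner code over a binary symmetric channel; these are not the codes studied in the papers, but they show how the two stages cooperate:

```python
import random

# Hedged toy sketch of a cascaded (concatenated) scheme: the inner
# repetition code cleans up most channel bit flips by majority vote,
# and the outer Hamming code corrects the occasional bit the inner
# stage gets wrong.

def hamming_encode(d):                     # d = 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming_decode(c):
    # Syndrome points at the single flipped position (1..7), 0 if clean.
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6])
         + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6]))
    if s:
        c = c[:]
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def transmit(data, eps, rng):
    """Outer Hamming encode -> inner 3x repetition -> BSC(eps) ->
    majority vote -> outer Hamming decode."""
    coded = hamming_encode(data)
    channel = [b ^ (rng.random() < eps) for b in coded for _ in range(3)]
    voted = [int(sum(channel[3 * i:3 * i + 3]) >= 2) for i in range(7)]
    return hamming_decode(voted)

rng = random.Random(1)
errors = sum(transmit([1, 0, 1, 1], 0.05, rng) != [1, 0, 1, 1]
             for _ in range(2000))
print(errors / 2000)  # block error rate, far below the raw eps = 0.05
```

    The same division of labor, with a powerful algebraic outer code such as Reed-Solomon, is what lets the cascaded schemes in the abstracts keep reliability high even at channel bit-error rates of 0.1-0.01.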

  15. Alternative knowledge acquisition: Developing a pulse coded neural network

    SciTech Connect

    Dress, W.B.

    1987-01-01

    After a Rip-van-Winkle nap of more than 20 years, the ideas of biologically motivated computing are re-emerging. Instrumental to this awakening have been the highly publicized contributions of John Hopfield and major advances in the neurosciences. In 1982, Hopfield showed how a system of maximally coupled neuron-like elements described by a Hamiltonian formalism (a linear, conservative system) could behave in a manner startlingly suggestive of the way humans might go about solving problems and retrieving memories. Continuing advances in the neurosciences are providing a coherent basis in suggesting how nature's neurons might function. A particular model is described for an artificial neural system designed to interact with (learn from and manipulate) a simulated (or real) environment. The model is based on early work by Iben Browning. The Browning model, designed to investigate computer-based intelligence, contains a particular simplification based on observations of frequency coding of information in the brain and information flow from receptors to the brain and back to effectors. The ability to act on and react to the environment was seen as an important principle, leading to self-organization of the system.

  16. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.

  17. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
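
    The three user inputs named above (grid-point range, planform area, angle of incidence) suggest how a vane's lift force might be estimated and distributed over cells; the thin-airfoil form below is a hypothetical illustration of that idea, not the Wind-US implementation:

```python
import math

# Hypothetical sketch of the vane-lift idea behind a source-term model:
# estimate the lift with a thin-airfoil lift slope, L = q * S * (2*pi*alpha),
# then spread it uniformly over the user-selected grid points. The form
# and constants here are my illustration, not the Wind-US source term.

def vane_lift(rho, u, planform_area, incidence_deg):
    cl = 2 * math.pi * math.radians(incidence_deg)  # thin-airfoil lift slope
    return 0.5 * rho * u**2 * planform_area * cl    # dynamic pressure * S * cl

def per_cell_force(rho, u, planform_area, incidence_deg, n_cells):
    """Total vane lift spread uniformly over the selected grid points."""
    return vane_lift(rho, u, planform_area, incidence_deg) / n_cells

# Hypothetical vane: 4 cm^2 planform at 16 degrees in a 50 m/s stream.
f = per_cell_force(rho=1.2, u=50.0, planform_area=4e-4,
                   incidence_deg=16, n_cells=8)
print(round(f, 4))  # force per cell, in newtons, for this example
```

    A microramp, as in the abstract, would then be two such vanes with incidence angles of opposite sign.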

  18. HELIOS: A new open-source radiative transfer code

    NASA Astrophysics Data System (ADS)

    Malik, Matej; Grosheintz, Luc; Lukas Grimm, Simon; Mendonça, João; Kitzmann, Daniel; Heng, Kevin

    2015-12-01

    I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2] we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results of the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles & spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b & CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, arXiv:1503.03806 [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944 [3] e.g. Heng, Mendonca & Lee, 2014, ApJS, 215, 4H [4] exoclime.net

  19. Application of a Two-dimensional Unsteady Viscous Analysis Code to a Supersonic Throughflow Fan Stage

    NASA Technical Reports Server (NTRS)

    Steinke, Ronald J.

    1989-01-01

    The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; then data postprocessing and graphics were performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.

  20. Rewriting the Epigenetic Code for Tumor Resensitization: A Review

    PubMed Central

    Oronsky, Bryan; Oronsky, Neil; Scicinski, Jan; Fanger, Gary; Lybeck, Michelle; Reid, Tony

    2014-01-01

    In cancer chemotherapy, one axiom, which has practically solidified into dogma, is that acquired resistance to antitumor agents or regimens, nearly inevitable in all patients with metastatic disease, remains unalterable and irreversible, rendering therapeutic rechallenge futile. However, the introduction of epigenetic therapies, including histone deacetylase inhibitors (HDACis) and DNA methyltransferase inhibitors (DNMTIs), provides oncologists, like computer programmers, with new techniques to “overwrite” the modifiable software pattern of gene expression in tumors and challenge the “one and done” treatment prescription. Taking the epigenetic code-as-software analogy a step further, if chemoresistance is the product of multiple nongenetic alterations, which develop and accumulate over time in response to treatment, then the possibility to hack or tweak the operating system and fall back on a “system restore” or “undo” feature, like the arrow icon in the Windows XP toolbar, reconfiguring the tumor to its baseline nonresistant state, holds tremendous promise for turning advanced, metastatic cancer from a fatal disease into a chronic, livable condition. This review aims 1) to explore the potential mechanisms by which a group of small molecule agents including HDACis (entinostat and vorinostat), DNMTIs (decitabine and 5-azacytidine), and redox modulators (RRx-001) may reprogram the tumor microenvironment from a refractory to a nonrefractory state, 2) highlight some recent findings, and 3) discuss whether the current “once burned forever spurned” paradigm in the treatment of metastatic disease should be revised to promote active resensitization attempts with formerly failed chemotherapies. PMID:25389457

  1. A neuronal learning rule for sub-millisecond temporal coding

    NASA Astrophysics Data System (ADS)

    Gerstner, Wulfram; Kempter, Richard; van Hemmen, J. Leo; Wagner, Hermann

    1996-09-01

    A paradox that exists in auditory and electrosensory neural systems [1,2] is that they encode behaviourally relevant signals in the range of a few microseconds with neurons that are at least one order of magnitude slower. The importance of temporal coding in neural information processing is not clear yet [3-8]. A central question is whether neuronal firing can be more precise than the time constants of the neuronal processes involved [9]. Here we address this problem using the auditory system of the barn owl as an example. We present a modelling study based on computer simulations of a neuron in the laminar nucleus. Three observations explain the paradox. First, spiking of an 'integrate-and-fire' neuron driven by excitatory postsynaptic potentials with a width at half-maximum height of 250 μs has an accuracy of 25 μs if the presynaptic signals arrive coherently. Second, the necessary degree of coherence in the signal arrival times can be attained during ontogenetic development by virtue of an unsupervised Hebbian learning rule. Learning selects connections with matching delays from a broad distribution of axons with random delays. Third, the learning rule also selects the correct delays from two independent groups of inputs, for example, from the left and right ear.

  2. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance those with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the problem size is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  3. Financial and clinical governance implications of clinical coding accuracy in neurosurgery: a multidisciplinary audit.

    PubMed

    Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar

    2010-04-01

    Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors, and in 40 cases the initial HRG changed (10.4%). Financially, this translated to £111 of lost revenue per patient episode, projecting to £171,452 of annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous, and with coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.

  4. A silicon-based surface code quantum computer

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Nickerson, Naomi H.; Ross, Philipp; Morton, John Jl; Benjamin, Simon C.

    2016-02-01

    Individual impurity atoms in silicon can make superb individual qubits, but it remains an immense challenge to build a multi-qubit processor: there is a basic conflict between nanometre separation desired for qubit-qubit interactions and the much larger scales that would enable control and addressing in a manufacturable and fault-tolerant architecture. Here we resolve this conflict by establishing the feasibility of surface code quantum computing using solid-state spins, or ‘data qubits’, that are widely separated from one another. We use a second set of ‘probe’ spins that are mechanically separate from the data qubits and move in and out of their proximity. The spin dipole-dipole interactions give rise to phase shifts; measuring a probe’s total phase reveals the collective parity of the data qubits along the probe’s path. Using a protocol that balances the systematic errors due to imperfect device fabrication, our detailed simulations show that substantial misalignments can be handled within fault-tolerant operations. We conclude that this simple ‘orbital probe’ architecture overcomes many of the difficulties facing solid-state quantum computing, while minimising the complexity and offering qubit densities that are several orders of magnitude greater than other systems.

  5. GRMHD Simulations of Jet Formation with a New Code

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated, constrained transport scheme is used to maintain a divergence-free magnetic field. Various one-dimensional test problems in both special and general relativity show significant improvements over our previous model. We have performed simulations of jet formation from a geometrically thin accretion disk near both nonrotating and rotating black holes. The new simulation results show that the jet is formed in the same manner as in previous work and propagates outward. In the rotating black hole cases, jets form much closer to the black hole's ergosphere and the magnetic field is strongly twisted due to the frame-dragging effect. As the magnetic field strength becomes weaker, a larger amount of matter is launched with the jet. On the other hand, when the magnetic field strength becomes stronger, the jet has less matter and becomes Poynting-flux dominated. We will also discuss how the jet properties depend on the rotation of a black hole.
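
    The HLL approximate Riemann flux mentioned above has a standard closed form; the sketch below is a generic illustration of that formula, not the paper's GRMHD implementation, and the signal-speed estimates are assumed to be supplied by the caller.

```python
def hll_flux(U_L, U_R, F_L, F_R, s_L, s_R):
    """HLL approximate Riemann flux at a cell interface.

    U_L, U_R : conserved-variable states left/right of the interface
    F_L, F_R : physical fluxes evaluated on those states
    s_L, s_R : fastest left- and right-going signal-speed estimates
    """
    if s_L >= 0.0:          # all waves move right: upwind on the left state
        return F_L
    if s_R <= 0.0:          # all waves move left: upwind on the right state
        return F_R
    # intermediate ("star") region average
    return (s_R * F_L - s_L * F_R + s_L * s_R * (U_R - U_L)) / (s_R - s_L)
```

    The same expression applies component-wise to the full vector of conserved GRMHD variables.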

  6. A grid-based coulomb collision model for PIC codes

    SciTech Connect

    Jones, M.E.; Lemons, D.S.; Mason, R.J.; Thomas, V.A.; Winske, D.

    1996-01-01

    A new method is presented to model the intermediate regime between collisionless and Coulomb-collision-dominated plasmas in particle-in-cell codes. Collisional processes between particles of different species are treated through the concept of a grid-based "collision field," which can be particularly efficient for multi-dimensional applications. In this method, particles are scattered using a force which is determined from the moments of the distribution functions accumulated on the grid. The form of the force is such as to reproduce the multi-fluid transport equations through the second (energy) moment. Collisions between particles of the same species require a separate treatment. For this, a Monte Carlo-like scattering method based on the Langevin equation is used. The details of both methods are presented, and their implementation in a new hybrid (particle ion, massless fluid electron) algorithm is described. Aspects of the collision model are illustrated through several one- and two-dimensional test problems as well as examples involving laser-produced colliding plasmas.
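
    A minimal illustration of Langevin-based velocity scattering of the kind described for same-species collisions might look as follows; the drag and diffusion coefficients here are simplified placeholders (a single collision frequency and target thermal speed), not the paper's actual model.

```python
import numpy as np

def langevin_scatter(v, nu, vth2, dt, rng):
    """One Langevin step: drag toward zero mean velocity plus a random
    diffusion kick, relaxing the distribution toward a Maxwellian with
    thermal speed squared vth2. Illustrative sketch, not the paper's scheme.

    v    : array of particle velocities
    nu   : collision frequency (assumed constant here)
    vth2 : target thermal speed squared
    dt   : time step (nu * dt << 1 assumed)
    rng  : numpy random Generator
    """
    drag = -nu * v * dt
    kick = np.sqrt(2.0 * nu * vth2 * dt) * rng.standard_normal(v.shape)
    return v + drag + kick
```

    Iterating this map relaxes an arbitrary initial distribution to the target temperature, which is the behavior the grid-based field and Langevin treatments are designed to reproduce.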

  7. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
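
    The folding step described above amounts to a weighted inner product of the coupling-surface fluence with the adjoint dose importance, summed over surface elements and energy groups. A minimal sketch (array shapes, names, and the quadrature weights are hypothetical, not MASH's actual data layout):

```python
import numpy as np

def fold_dose(fluence, importance, weights):
    """Fold forward fluence with adjoint dose importance.

    fluence    : (n_surface, n_groups) forward fluence on the coupling surface
    importance : (n_surface, n_groups) adjoint "dose importance"
    weights    : (n_surface,) surface-element quadrature weights
    Returns the scalar dose response.
    """
    return float(np.sum(weights[:, None] * fluence * importance))
```
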

  8. Development of a robust mapping between AIS 2+ and ICD-9 injury codes.

    PubMed

    Barnard, Ryan T; Loftis, Kathryn L; Martin, R Shayn; Stitzel, Joel D

    2013-03-01

    Motor vehicle crashes result in millions of injuries and thousands of deaths each year in the United States. While most crash research datasets use Abbreviated Injury Scale (AIS) codes to identify injuries, most hospital datasets use the International Classification of Diseases, version 9 (ICD-9) codes. The objective of this research was to establish a one-to-one mapping between AIS and ICD-9 codes for use with motor vehicle crash injury research. This paper presents results from investigating different mapping approaches using the most common AIS 2+ injuries from the National Automotive Sampling System-Crashworthiness Data System (NASS-CDS). The mapping approaches were generated from the National Trauma Data Bank (NTDB) (428,637 code pairs), ICDMAP (2500 code pairs), and the Crash Injury Research and Engineering Network (CIREN) (4125 code pairs). Each approach may pair a given AIS code with more than one ICD-9 code (mean number of pairs per AIS code: NTDB=211, ICDMAP=7, CIREN=5), and some of the potential pairs are unrelated. The mappings were evaluated using two comparative metrics coupled with qualitative inspection by an expert physician. Based on the number of false mappings and correct pairs, the best mapping was derived from CIREN. AIS and ICD-9 codes in CIREN are both manually coded, leading to more proper mappings between the two. Using the mapping presented herein, data from crash and hospital datasets can be used together to better understand and prevent motor vehicle crash injuries in the future.
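
    One simple way to collapse a many-to-many set of observed code pairs into a one-to-one map, in the spirit of the frequency-based derivations evaluated here, is to keep the most frequent ICD-9 partner for each AIS code. The code values below are made up for illustration, not real AIS or ICD-9 assignments.

```python
from collections import Counter

def best_map(pairs):
    """Derive a one-to-one AIS -> ICD-9 map: for each AIS code, keep the
    ICD-9 code it co-occurs with most often in the observed pairs."""
    counts = Counter(pairs)
    mapping = {}
    for (ais, icd), n in counts.most_common():
        if ais not in mapping:   # most_common yields highest counts first
            mapping[ais] = icd
    return mapping

# Hypothetical observed (AIS, ICD-9) pairs from a trauma dataset
pairs = [
    ("450203.2", "807.02"),
    ("450203.2", "807.02"),
    ("450203.2", "807.03"),
    ("140651.3", "851.05"),
]
```

    Real derivations also have to handle unrelated pairings, which is why the paper couples frequency metrics with expert physician review.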

  9. A Noise-Aware Coding Scheme for Texture Classification

    PubMed Central

    Shoyaib, Mohammad; Abdullah-Al-Wadud, M.; Chae, Oksam

    2011-01-01

    Texture-based analysis of images is a very common and much discussed issue in the fields of computer vision and image processing. Several methods have already been proposed to codify texture micro-patterns (texlets) in images. Most of these methods perform well when a given image is noise-free, but real world images contain different types of signal-independent as well as signal-dependent noises originated from different sources, even from the camera sensor itself. Hence, it is necessary to differentiate false textures appearing due to the noises, and thus, to achieve a reliable representation of texlets. In this proposal, we define an adaptive noise band (ANB) to approximate the amount of noise contamination around a pixel up to a certain extent. Based on this ANB, we generate reliable codes named noise tolerant ternary pattern (NTTP) to represent the texlets in an image. Extensive experiments on several datasets from renowned texture databases, such as the Outex and the Brodatz database, show that NTTP performs much better than the state-of-the-art methods. PMID:22164060
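
    The ternary coding idea can be illustrated with a simplified fixed-band variant: neighbors within a noise band around the center pixel are coded 0, those above it +1, and those below it -1. The paper's ANB adapts this band per pixel, which the sketch below does not attempt.

```python
import numpy as np

def ternary_code(patch, band):
    """Ternary-code a 3x3 patch around its center pixel.

    Neighbors within +/- band of the center map to 0 (treated as noise),
    above the band to +1, below to -1. Simplified fixed noise band for
    illustration; NTTP's adaptive noise band varies per pixel.
    """
    c = patch[1, 1]
    # neighbors clockwise from the top-left corner
    nbrs = np.array([patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                     patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]])
    code = np.zeros(8, dtype=int)
    code[nbrs > c + band] = 1
    code[nbrs < c - band] = -1
    return code
```

    Small intensity fluctuations around the center fall inside the band and no longer flip the pattern, which is the noise tolerance the abstract describes.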

  10. HYDRA, A finite element computational fluid dynamics code: User manual

    SciTech Connect

    Christon, M.A.

    1995-06-01

    HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to achieve a high level of performance, rather than just performing a port.

  11. A Software Upgrade of the NASA Aeroheating Code "MINIVER"

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce Mathew

    2013-01-01

    Computational Fluid Dynamics (CFD) is a powerful and versatile tool simulating fluid and thermal environments of launch and re-entry vehicles alike. Where it excels in power and accuracy, however, it lacks in speed. An alternative tool for this purpose is known as MINIVER, an aeroheating code widely used by NASA and within the aerospace industry. Capable of providing swift, reasonably accurate approximations of the fluid and thermal environment of launch vehicles, MINIVER is used where time is of the essence and accuracy need not be exact. However, MINIVER is an old, aging tool: running on a user-unfriendly, legacy command-line interface, it is difficult for it to keep pace with more modern software tools. Florida Institute of Technology was tasked with the construction of a new Graphical User Interface (GUI) that implemented the legacy version's capabilities and enhanced them with new tools and utilities. This thesis provides background to the legacy version of the program, the progression and final version of a modern user interface, and benchmarks to demonstrate its usefulness.

  12. Automated apparatus and method of generating native code for a stitching machine

    NASA Technical Reports Server (NTRS)

    Miller, Jeffrey L. (Inventor)

    2000-01-01

    A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
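
    The generation loop described in the abstract can be sketched as follows; the function name, point representation, constraint test, and output mnemonics are hypothetical placeholders, not the patented implementation or real CNC G-code.

```python
def generate_stitch_program(points, constraint_between):
    """Walk successive stitching points and emit one instruction per step:
    a stitch command when no constraint lies between the present and next
    point, otherwise a command changing the stitching head's condition
    (e.g., direction). Mnemonics here are made up for illustration."""
    program = []
    for current, nxt in zip(points, points[1:]):
        if constraint_between(current, nxt):
            program.append(f"CHANGE_DIR {nxt[0]} {nxt[1]}")
        else:
            program.append(f"STITCH {nxt[0]} {nxt[1]}")
    return program
```
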

  13. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-04

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.

  14. Physics under the bonnet of a stellar evolution code

    NASA Astrophysics Data System (ADS)

    Stancliffe, Richard J.

    Just how good are modern stellar models? Providing a rigorous assessment of the uncertainties is difficult because of the multiplicity of input physics. Some of the ingredients are reasonably well-known (like reaction rates and opacities). Others are not so good, with convection standing out as a particularly obvious example. In some cases, it is not clear what the ingredients should be: what role do atomic diffusion, rotation, magnetic fields, etc. play in stellar evolution? All this is then compounded by computational method. In converting all this physics into something we can implement in a 1D evolution code, we are forced to make choices about the way the equations are solved, how we will treat mixing at convective boundaries, etc. All of this can impact the models one finally generates. In this review, I will attempt to assess the uncertainties associated with the ingredients and methods used by stellar evolution modellers, and what their impacts may be on the science that we wish to do.

  15. Ribosomal S27a coding sequences upstream of ubiquitin coding sequences in the genome of a pestivirus.

    PubMed

    Becher, P; Orlich, M; Thiel, H J

    1998-11-01

    Molecular characterization of cytopathogenic (cp) bovine viral diarrhea virus (BVDV) strain CP Rit, a temperature-sensitive strain widely used for vaccination, revealed that the viral genomic RNA is about 15.2 kb long, which is about 2.9 kb longer than that of noncytopathogenic (noncp) BVDV strains. Molecular cloning and nucleotide sequencing of parts of the genome resulted in the identification of a duplication of the genomic region encoding nonstructural proteins NS3, NS4A, and part of NS4B. In addition, a nonviral sequence was found directly upstream of the second copy of the NS3 gene. The 3' part of this inserted sequence encodes an N-terminally truncated ubiquitin monomer. This is remarkable since all described cp BVDV strains with ubiquitin coding sequences contain at least one complete ubiquitin monomer. The 5' region of the nonviral sequence did not show any homology to cellular sequences identified thus far in cp BVDV strains. Databank searches revealed that this second cellular insertion encodes part of ribosomal protein S27a. Further analyses included molecular cloning and nucleotide sequencing of the cellular recombination partner. Sequence comparisons strongly suggest that the S27a and the ubiquitin coding sequences found in the genome of CP Rit were both derived from a bovine mRNA encoding a hybrid protein with the structure NH2-ubiquitin-S27a-COOH. Polyprotein processing in the genomic region encoding the N-terminal part of NS4B, the two cellular insertions, and NS3 was studied by a transient-expression assay. The respective analyses showed that the S27a-derived polypeptide, together with the truncated ubiquitin, served as a processing signal to yield NS3, whereas the truncated ubiquitin alone was not capable of mediating the cleavage. Since the expression of NS3 is strictly correlated with the cp phenotype of BVDV, the altered genome organization leading to expression of NS3 most probably represents the genetic basis of cytopathogenicity of CP Rit.

  16. A New Approach to Model Pitch Perception Using Sparse Coding

    PubMed Central

    Furst, Miriam; Barak, Omri

    2017-01-01

    Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or low- and high-amplitude stimuli with the same spectral content: these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments. PMID:28099436
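
    The sparse coding step can be illustrated with a generic greedy matching-pursuit routine that represents a signal using only a few dictionary atoms; this is a textbook sketch, not the paper's model, which builds its dictionary from spatiotemporal AN-response templates.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy sparse coding: approximate signal x with n_atoms columns of
    dictionary D (columns assumed unit-norm). Returns the sparse coefficient
    vector and the remaining residual."""
    residual = x.astype(float)
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        scores = D.T @ residual            # correlation with every atom
        k = int(np.argmax(np.abs(scores))) # best-matching atom
        coeffs[k] += scores[k]
        residual = residual - scores[k] * D[:, k]
    return coeffs, residual
```

    Only a few coefficients end up non-zero, each analogous to an active neuron in the abstract's description.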

  17. Mother-lamb acoustic recognition in sheep: a frequency coding.

    PubMed Central

    Searby, Amanda; Jouventin, Pierre

    2003-01-01

    Ewes of the domestic sheep (Ovis aries) display selective maternal investment by restricting care to their own offspring and rejecting alien young. This trait relies on individual recognition processes between ewes and lambs. Whereas identification at the udder is only olfactory, distance recognition is performed through visual and acoustic cues. We studied the effectiveness and modalities of mutual acoustic recognition between ewes and lambs by spectrographic analysis of their vocal signatures and by playbacks of modified calls in the field. Our results show that ewes and their lambs can recognize each other based solely on their calls. The coding of identity within the vocal signatures, previously unknown in sheep, is similar in lamb and ewe: it uses the mean frequency and the spectral energy distribution of the call, namely the timbre of the call. These results point out a simple signature system in sheep that uses only the frequency domain. This engenders a signal with low information content, as opposed to some highly social birds and mammal species that may integrate information both in the temporal and spectral domains. The simplicity of this system is linked to the roles played by vision and olfaction that corroborate the information brought by the vocal signature. PMID:12964977

  18. SPQR: a Monte Carlo reactor kinetics code. [LMFBR

    SciTech Connect

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  19. COBRA-SFS: A thermal-hydraulic analysis code for spent fuel storage and transportation casks

    SciTech Connect

    Michener, T.E.; Rector, D.R.; Cuta, J.M.; Dodge, R.E.; Enderlin, C.W.

    1995-09-01

    COBRA-SFS is a general thermal-hydraulic analysis computer code for prediction of material temperatures and fluid conditions in a wide variety of systems. The code has been validated for analysis of spent fuel storage systems, as part of the Commercial Spent Fuel Management Program of the US Department of Energy. The code solves finite volume equations representing the conservation equations for mass, momentum, and energy for an incompressible single-phase heat transfer fluid. The fluid solution is coupled to a finite volume solution of the conduction equation in the solid structure of the system. This document presents a complete description of Cycle 2 of COBRA-SFS, and consists of three main parts. Part 1 describes the conservation equations, constitutive models, and solution methods used in the code. Part 2 presents the User Manual, with guidance on code applications, and complete input instructions. This part also includes a detailed description of the auxiliary code RADGEN, used to generate grey body view factors required as input for radiative heat transfer modeling in the code. Part 3 describes the code structure, platform dependent coding, and program hierarchy. Installation instructions are also given for the various platform versions of the code that are available.

  20. CARS Thermometry in a Supersonic Combustor for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Danehy, P. M.; Springer, R. R.; DeLoach, R.; Capriotti, D. P.

    2002-01-01

    An experiment has been conducted to acquire data for the validation of computational fluid dynamics (CFD) codes used in the design of supersonic combustors. The primary measurement technique is coherent anti-Stokes Raman spectroscopy (CARS), although surface pressures and temperatures have also been acquired. Modern design-of-experiments techniques have been used to maximize the quality of the data set (for the given level of effort) and minimize systematic errors. The combustor consists of a diverging duct with a single downstream-angled wall injector. Nominal entrance Mach number is 2 and enthalpy nominally corresponds to Mach 7 flight. Temperature maps are obtained at several planes in the flow for two cases: in one case the combustor is piloted by injecting fuel upstream of the main injector, the second is not. Boundary conditions and uncertainties are adequately characterized. Accurate CFD calculation of the flow will ultimately require accurate modeling of the chemical kinetics and turbulence-chemistry interactions as well as accurate modeling of the turbulent mixing.

  1. Dysregulation of REST-regulated coding and non-coding RNAs in a cellular model of Huntington's disease.

    PubMed

    Soldati, Chiara; Bithell, Angela; Johnston, Caroline; Wong, Kee-Yew; Stanton, Lawrence W; Buckley, Noel J

    2013-02-01

    Huntingtin (Htt) protein interacts with many transcriptional regulators, with widespread disruption to the transcriptome in Huntington's disease (HD) brought about by altered interactions with the mutant Htt (muHtt) protein. Repressor Element-1 Silencing Transcription Factor (REST) is a repressor whose association with Htt in the cytoplasm is disrupted in HD, leading to increased nuclear REST and concomitant repression of several neuronal-specific genes, including brain-derived neurotrophic factor (Bdnf). Here, we explored a wide set of HD dysregulated genes to identify direct REST targets whose expression is altered in a cellular model of HD but that can be rescued by knock-down of REST activity. We found many direct REST target genes encoding proteins important for nervous system development, including a cohort involved in synaptic transmission, at least two of which can be rescued at the protein level by REST knock-down. We also identified several microRNAs (miRNAs) whose aberrant repression is directly mediated by REST, including miR-137, which has not previously been shown to be a direct REST target in mouse. These data provide evidence of the contribution of inappropriate REST-mediated transcriptional repression to the widespread changes in coding and non-coding gene expression in a cellular model of HD that may affect normal neuronal function and survival.

  2. Selection of a numerical unsaturated flow code for tilted capillary barrier performance evaluation

    SciTech Connect

    Webb, S.W.

    1996-09-01

    Capillary barriers consisting of tilted fine-over-coarse layers have been suggested as landfill covers as a means to divert water infiltration away from sensitive underground regions under unsaturated flow conditions, especially for arid and semi-arid regions. Typically, the HELP code is used to evaluate landfill cover performance and design. Unfortunately, due to its simplified treatment of unsaturated flow and its essentially one-dimensional nature, HELP is not adequate to treat the complex multidimensional unsaturated flow processes occurring in a tilted capillary barrier. In order to develop the necessary mechanistic code for the performance evaluation of tilted capillary barriers, an efficient and comprehensive unsaturated flow code needs to be selected for further use and modification. The present study evaluates a number of candidate mechanistic unsaturated flow codes for application to tilted capillary barriers. Factors considered included unsaturated flow modeling, inclusion of evapotranspiration, nodalization flexibility, ease of modification, and numerical efficiency. A number of unsaturated flow codes are available for use with different features and assumptions. The codes chosen for this evaluation are TOUGH2, FEHM, and SWMS_2D. All three codes chosen for this evaluation successfully simulated the capillary barrier problem chosen for the code comparison, although FEHM used a reduced grid. The numerical results are a strong function of the numerical weighting scheme. For the same weighting scheme, similar results were obtained from the various codes. Based on the CPU time of the various codes and the code capabilities, the TOUGH2 code has been selected as the appropriate code for tilted capillary barrier performance evaluation, possibly in conjunction with the infiltration, runoff, and evapotranspiration models of HELP. 44 refs.

  3. A supersonic three-dimensional code for flow over blunt bodies: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Chaussee, D. S.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of steady, supersonic, three dimensional, inviscid flow over blunt bodies is illustrated. Input and output are given and explained for two cases: a pointed cone of 20 deg half-angle at 15 deg angle of attack in a free stream with M∞ = 7, and a cone-ogive-cylinder at 10 deg angle of attack with M∞ = 2.86. A source listing of the computer code is provided.

  4. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  5. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
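
    The smallest binomial code words can be written down directly: with Fock-state spacing 2, |W+> = (|0> + |4>)/sqrt(2) and |W-> = |2>. The check below, a sketch using plain state vectors rather than any quantum library, verifies that the two words are orthonormal and share mean boson number 2, the property that makes a single boson loss detectable by the generalized number parity.

```python
import numpy as np

# Smallest binomial code words in a Fock basis truncated at n = 4:
# |W+> = (|0> + |4>)/sqrt(2),  |W-> = |2>.
dim = 5
w_plus = np.zeros(dim)
w_plus[0] = w_plus[4] = 1.0 / np.sqrt(2.0)
w_minus = np.zeros(dim)
w_minus[2] = 1.0

# Number operator is diagonal in the Fock basis
n_op = np.diag(np.arange(dim, dtype=float))

mean_n_plus = w_plus @ n_op @ w_plus    # expect 2
mean_n_minus = w_minus @ n_op @ w_minus  # expect 2
```

    Equal mean boson number means boson loss maps both words into the odd-parity subspace at the same rate, so the error is flagged by a parity measurement without distinguishing (and hence dephasing) the logical states.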

  6. A dual-sided coded-aperture radiation detection system

    NASA Astrophysics Data System (ADS)

    Penny, R. D.; Hood, W. E.; Polichar, R. M.; Cardone, F. H.; Chavez, L. G.; Grubbs, S. G.; Huntley, B. P.; Kuharski, R. A.; Shyffer, R. T.; Fabris, L.; Ziock, K. P.; Labov, S. E.; Nelson, K.

    2011-10-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5×5×50 cm³ cesium-iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.

  7. Medical terminology coding systems and medicolegal death investigation data: searching for a standardized method of electronic coding at a statewide medical examiner's office.

    PubMed

    Lathrop, Sarah L; Davis, Wayland L; Nolte, Kurt B

    2009-01-01

    Medical examiner and coroner reports are a rich source of data for epidemiologic research. To maximize the utility of this information, medicolegal death investigation data need to be electronically coded. In order to determine the best option for coding, we evaluated four different options (Current Procedural Terminology [CPT], International Classification of Disease [ICD] coding, Systematized Nomenclature of Medicine Clinical Terms [SNOMED CT], and an in-house system), then conducted internal and external needs assessments to determine which system best met the needs of a centralized, statewide medical examiner's office. Although all four systems offer distinct advantages and disadvantages, SNOMED CT is the most accurate for coding pathologic diagnoses, with ICD-10 the best option for classifying the cause of death. For New Mexico's Office of the Medical Investigator, the most feasible coding option is an upgrade of an in-house coding system, followed by linkage to ICD codes for cause of death from the New Mexico Bureau of Vital Records and Health Statistics, and ideally, SNOMED classification of pathologic diagnoses.

  8. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.
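For reference, a widely used high-Reynolds-number form of the k-epsilon model (the standard Launder-Spalding formulation with its usual closure constants) is sketched below; the RPLUS3D implementation may differ in detail:

```latex
\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho \mathbf{u}\, k)
  = \nabla\cdot\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right] + P_k - \rho\varepsilon
```
```latex
\frac{\partial(\rho \varepsilon)}{\partial t} + \nabla\cdot(\rho \mathbf{u}\, \varepsilon)
  = \nabla\cdot\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla \varepsilon\right]
  + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\,\rho\,\frac{\varepsilon^2}{k}
```
```latex
\mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon},\qquad
C_\mu = 0.09,\ \ C_{\varepsilon 1}=1.44,\ \ C_{\varepsilon 2}=1.92,\ \ \sigma_k=1.0,\ \ \sigma_\varepsilon=1.3 .
```

Here P_k is the turbulence production term; coupling these two transport equations into an implicit LU scheme is the kind of implementation difficulty the abstract alludes to.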

  9. Polish Code of Ethics of a Medical Laboratory Specialist

    PubMed Central

    2014-01-01

Along with the development of medicine, an increasingly significant role has been played by laboratory diagnostics. For over ten years the profession of the medical laboratory specialist has been regarded in Poland as an autonomous medical profession and has enjoyed the status of one of public trust. The process of education of medical laboratory specialists consists of a five-year degree in laboratory medicine, offered at Medical Universities, and of a five-year Vocational Specialization in one of the fields of laboratory medicine such as clinical biochemistry, medical microbiology, medical laboratory toxicology, medical laboratory cytomorphology and medical laboratory transfusiology. An important component of medical laboratory specialists’ identity is awareness of the inherited ethos obtained from bygone generations of workers in this profession and of the need to continue its further development. An expression of this awareness is, among others, the Polish Code of Ethics of a Medical Laboratory Specialist (CEMLS), containing a set of values and a moral standpoint characteristic of this professional environment. Presenting the ethos of the medical laboratory specialist is the purpose of this article. The authors focus on the role CEMLS plays in the areas of professional ethics and law. Next, they reconstruct the Polish model of the ethos of medical diagnostic laboratory personnel. The overall picture consists of a presentation of the general moral principles concerning the exercise of this profession and the rules of conduct in relations with the patient, one's own professional environment and the rest of society. The Polish model of ethical conduct, which is rooted in the Hippocratic medical tradition, harmonizes with the ethos of medical laboratory specialists of other European countries and the world. PMID:27683468

  10. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
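A minimal example of the direct waveform-coding approach mentioned above is mu-law companding, long used in 8-bit digital telephony; this sketch uses the continuous companding formula (the deployed ITU-T G.711 codec uses a segmented approximation of it):

```python
import math

# Mu-law companding: compress a sample in [-1, 1] logarithmically before
# quantization, expand on the receiving side. MU = 255 is the telephony value.

MU = 255.0

def mu_law_compress(x):
    """Map x in [-1, 1] to [-1, 1] with logarithmic companding."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse mapping back to the linear sample domain."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

sample = 0.25
coded = mu_law_compress(sample)    # larger magnitude: small samples get more resolution
decoded = mu_law_expand(coded)     # recovers the original sample (before quantization)
```

The logarithmic mapping allocates quantizer resolution where speech amplitudes are most common, which is the basic insight behind waveform speech coding.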

  11. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group; it offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code gains better error correction performance over the additive white Gaussian noise (AWGN) channel with the iterative sum-product decoding algorithm (SPA). At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
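The general idea behind such constructions, assembling the parity-check matrix from circulant permutation matrices whose shift exponents come from elements of a finite-field multiplicative group, can be sketched as follows; the shift rule, field size, and block dimensions here are illustrative and are not the paper's specific construction:

```python
import numpy as np

# Generic quasi-cyclic parity-check matrix built from circulant permutation
# matrices (CPMs). Shift exponents are products of powers of two elements a, b
# of the multiplicative group of GF(p); parameters are illustrative.

def cpm(p, shift):
    """p x p circulant permutation matrix: identity cyclically shifted by `shift`."""
    return np.roll(np.eye(p, dtype=int), shift % p, axis=1)

def qc_ldpc_H(p=13, a=2, b=6, rows=3, cols=6):
    """Block matrix of rows x cols CPMs with shifts (a^i * b^j) mod p."""
    blocks = [[cpm(p, pow(a, i, p) * pow(b, j, p) % p) for j in range(cols)]
              for i in range(rows)]
    return np.block(blocks)

H = qc_ldpc_H()
# H is (rows*p) x (cols*p), regular: row weight = cols, column weight = rows.
```

Because each block is fully determined by one shift integer, only the small exponent table needs to be stored, which is the source of the low encoding/decoding complexity claimed for QC-LDPC codes.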

  12. A Combinatorial Code for Splicing Silencing: UAGG and GGGG Motifs

    PubMed Central

    An, Ping; Burge, Christopher B

    2005-01-01

    Alternative pre-mRNA splicing is widely used to regulate gene expression by tuning the levels of tissue-specific mRNA isoforms. Few regulatory mechanisms are understood at the level of combinatorial control despite numerous sequences, distinct from splice sites, that have been shown to play roles in splicing enhancement or silencing. Here we use molecular approaches to identify a ternary combination of exonic UAGG and 5′-splice-site-proximal GGGG motifs that functions cooperatively to silence the brain-region-specific CI cassette exon (exon 19) of the glutamate NMDA R1 receptor (GRIN1) transcript. Disruption of three components of the motif pattern converted the CI cassette into a constitutive exon, while predominant skipping was conferred when the same components were introduced, de novo, into a heterologous constitutive exon. Predominant exon silencing was directed by the motif pattern in the presence of six competing exonic splicing enhancers, and this effect was retained after systematically repositioning the two exonic UAGGs within the CI cassette. In this system, hnRNP A1 was shown to mediate silencing while hnRNP H antagonized silencing. Genome-wide computational analysis combined with RT-PCR testing showed that a class of skipped human and mouse exons can be identified by searches that preserve the sequence and spatial configuration of the UAGG and GGGG motifs. This analysis suggests that the multi-component silencing code may play an important role in the tissue-specific regulation of the CI cassette exon, and that it may serve more generally as a molecular language to allow for intricate adjustments and the coordination of splicing patterns from different genes. PMID:15828859
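The kind of motif-configuration search described in the last sentence can be illustrated with a toy scan for exons carrying at least two exonic UAGG motifs plus a GGGG motif near the 5' splice site; the count threshold and the "proximal" window size below are illustrative choices, not values from the paper:

```python
import re

# Toy scan for the silencing motif arrangement: >= 2 exonic UAGGs plus a GGGG
# within a window at the start of the downstream intron (near the 5' splice site).

def has_silencing_pattern(exon, downstream_intron, proximal_window=30):
    uagg_count = len(re.findall(r"UAGG", exon))
    gggg_near_5ss = "GGGG" in downstream_intron[:proximal_window]
    return uagg_count >= 2 and gggg_near_5ss

exon = "CCAUAGGAAUCGUAGGCCA"      # contains two UAGG motifs
intron_start = "GUAAGGGGGCUUACG"  # GGGG within the proximal window
hit = has_silencing_pattern(exon, intron_start)
```

A genome-wide version of such a search, preserving both the sequence and the spatial configuration of the motifs, is what the authors combined with RT-PCR testing.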

  13. Verification & Validation Toolkit to Assess Codes: Is it Theory Limitation, Numerical Method Inadequacy, Bug in the Code or a Serious Flaw?

    NASA Astrophysics Data System (ADS)

    Bombardelli, F. A.; Zamani, K.

    2014-12-01

We introduce and discuss an open-source, user-friendly, numerical post-processing piece of software to assess the reliability of modeling results from environmental fluid mechanics codes. Verification and Validation, Uncertainty Quantification (VAVUQ) is a toolkit developed in Matlab© for general V&V purposes. In this work, the VAVUQ implementation of V&V techniques and its user interfaces are discussed. VAVUQ is able to read Excel, Matlab, ASCII, and binary files, and it produces a log of the results in txt format. Next, each capability of the code is discussed through an example: the first example is code verification of a sediment transport code, developed with the Finite Volume Method, via MES. The second example is solution verification of a code for groundwater flow, developed with the Boundary Element Method, via MES. The third example is solution verification of a mixed-order, Compact Difference Method code for heat transfer via MMS. The fourth example is solution verification of a 2-D, Finite Difference Method code for floodplain analysis via Complete Richardson Extrapolation. In turn, the application of VAVUQ in quantitative model skill assessment studies (validation) of environmental codes is shown through two examples: validation of a two-phase flow computational model of air entrainment in a free surface flow against lab measurements, and heat transfer modeling at the earth's surface against field measurements. At the end, we discuss practical considerations and common pitfalls in the interpretation of V&V results.
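The Richardson-extrapolation style of solution verification mentioned above boils down to estimating an observed order of accuracy from three systematically refined solutions; this generic sketch shows the standard calculation, not VAVUQ's actual interface:

```python
import math

# Solution verification: observed order of accuracy p from solutions
# f1 (fine), f2 (medium), f3 (coarse) on grids with constant refinement
# ratio r, followed by Richardson extrapolation toward the exact solution.

def observed_order(f1, f2, f3, r):
    return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

def richardson_extrapolate(f1, f2, r, p):
    return f1 + (f1 - f2) / (r**p - 1)

# Manufactured check: f(h) = f_exact + C*h^2 should recover p = 2.
f_exact, C, r, h = 1.0, 0.5, 2.0, 0.1
f1, f2, f3 = (f_exact + C * (h * r**k) ** 2 for k in range(3))
p = observed_order(f1, f2, f3, r)             # expected: 2
f_star = richardson_extrapolate(f1, f2, r, p) # expected: f_exact
```

Comparing the observed p against the theoretical order of the scheme is exactly how a code-verification study flags theory limitations, numerical-method inadequacies, or bugs.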

  14. A Normative Code of Conduct for Admissions Officers

    ERIC Educational Resources Information Center

    Hodum, Robert L.

    2012-01-01

    The increasing competition for the desired quantity and quality of college students, along with the rise of for-profit institutions, has amplified the scrutiny of behavior and ethics among college admissions professionals and has increased the need for meaningful ethical guidelines and codes of conduct. Many other areas of responsibility within…

  15. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. 
Many applications of MMA will
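The model-averaging step that MMA's AIC-based default methods build on can be sketched with Akaike weights; the AIC values and parameter estimates below are invented for illustration:

```python
import math

# Akaike weights: convert information-criterion values for K calibrated
# alternative models into posterior-like model probabilities, then form a
# model-averaged parameter estimate.

def akaike_weights(aics):
    deltas = [a - min(aics) for a in aics]
    raw = [math.exp(-0.5 * d) for d in deltas]
    total = sum(raw)
    return [w / total for w in raw]

aics = [210.4, 212.1, 218.9]    # three alternative models of the same system
weights = akaike_weights(aics)  # sums to 1; the lowest-AIC model weighs most
estimates = [3.2, 3.5, 2.8]     # the same parameter as estimated by each model
averaged = sum(w * e for w, e in zip(weights, estimates))
```

The same weighting formula applies with AICc, BIC, or KIC in place of AIC, which is how the four default methods slot into one framework.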

  16. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy David; Krolik, Julian H.

    2013-01-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
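The sampling structure of such a Monte Carlo transport loop can be illustrated in a deliberately simplified setting (flat space, isotropic scattering, exponential free paths); the general-relativistic geodesic machinery of the actual code is far more involved, and all parameters here are invented:

```python
import math
import random

# Schematic Monte Carlo photon transport: sample exponential free paths and
# scatter with probability `albedo` until absorption occurs.

def propagate(n_photons, mu_t=2.0, albedo=0.5, seed=1):
    """Track photons until absorbed; return the mean number of scatterings."""
    rng = random.Random(seed)
    total_scatters = 0
    for _ in range(n_photons):
        while True:
            _path = -math.log(rng.random()) / mu_t  # free path ~ Exp(mu_t)
            if rng.random() < albedo:               # scattering event
                total_scatters += 1
            else:                                   # absorption ends the history
                break
    return total_scatters / n_photons

mean_scatters = propagate(20000)  # expectation: albedo / (1 - albedo) = 1.0
```

Convergence tests of the kind the abstract mentions amount to checking that such sampled statistics approach their analytic expectations as the photon count grows.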

  17. ACFAC: a cash flow analysis code for estimating product price from an industrial operation

    SciTech Connect

    Delene, J.G.

    1980-04-01

A computer code is presented which uses a discounted cash flow methodology to obtain an average product price for an industrial process. The general discounted cash flow method is discussed. Special code options include multiple treatments of interest during construction and other preoperational costs, investment tax credits, and different methods for tax depreciation of capital assets. Two options for allocating the cost of plant decommissioning are available. The FORTRAN code listing and the computer output for a sample problem are included.
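The core of the discounted-cash-flow price calculation is solving for the product price that makes the present value of revenues equal the present value of costs; this minimal sketch uses end-of-year discounting and invented numbers, and omits ACFAC's tax and depreciation treatments:

```python
# Levelized product price from discounted cash flows: the price p satisfying
# sum_t p * q_t / (1+r)^t  =  sum_t c_t / (1+r)^t.

def levelized_price(costs, quantities, rate):
    """costs[t] and quantities[t] for years t = 0..N-1; end-of-year discounting."""
    pv_costs = sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(costs))
    pv_qty = sum(q / (1 + rate) ** (t + 1) for t, q in enumerate(quantities))
    return pv_costs / pv_qty

price = levelized_price(costs=[100.0, 50.0, 50.0],       # construction, then O&M
                        quantities=[0.0, 40.0, 40.0],    # production starts year 2
                        rate=0.08)
```

Options such as investment tax credits or decommissioning allocations enter this framework as additional terms in the cost cash-flow stream.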

  18. A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES

    SciTech Connect

    Schnittman, Jeremy D.; Krolik, Julian H. E-mail: jhk@pha.jhu.edu

    2013-11-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  19. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.

  20. (R, S)-Norm Information Measure and A Relation Between Coding and Questionnaire Theory

    NASA Astrophysics Data System (ADS)

    Joshi, Rajesh; Kumar, Satish

    2016-10-01

In this paper, we introduce a quantity called the (R, S)-norm entropy and discuss some of its major properties in comparison with Shannon’s and other entropies known in the literature. Further, we give an application of the (R, S)-norm entropy in coding theory and a coding theorem analogous to the ordinary coding theorem for a noiseless channel. The theorem states that the proposed entropy is a lower bound on the mean codeword length. Further, we apply the (R, S)-norm entropy and the noiseless coding theorem to questionnaire theory: we show that a relationship between the noiseless coding theorem and questionnaire theory can be obtained through a charging scheme based on the resolution of questions, together with a lower bound on the measure of the charge.
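The classical result this generalizes is the Shannon noiseless coding theorem, H(P) ≤ L < H(P) + 1, where the mean length L of the Shannon code lengths l_i = ⌈-log2 p_i⌉ is bounded below by the entropy; this can be checked numerically:

```python
import math

# Shannon's noiseless coding bound: entropy is a lower bound on the mean
# codeword length of any uniquely decodable code; the Shannon code lengths
# l_i = ceil(-log2 p_i) achieve L < H + 1.

def shannon_entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def shannon_code_lengths(p):
    return [math.ceil(-math.log2(pi)) for pi in p]

p = [0.5, 0.25, 0.125, 0.125]        # dyadic distribution
H = shannon_entropy(p)               # 1.75 bits
L = sum(pi * li for pi, li in zip(p, shannon_code_lengths(p)))
# For a dyadic distribution the bound is tight: L = H exactly.
```

The (R, S)-norm entropy of the paper plays the role of H in an analogous lower bound on mean codeword length.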

1. Intrabeam Scattering Studies for the ILC Damping Rings Using a New MATLAB Code

    SciTech Connect

    Reichel, I.; Wolski, A.

    2006-06-21

    A new code to calculate the effects of intrabeam scattering (IBS) has been developed in MATLAB based on the approximation suggested by K. Bane. It interfaces with the Accelerator Toolbox but can also read in lattice functions from other codes. The code has been benchmarked against results from other codes for the ATF that use this approximation or do the calculation in a different way. The new code has been used to calculate the emittance growth due to intrabeam scattering for the lattices currently proposed for the ILC Damping Rings, as IBS is a concern, especially for the electron ring. A description of the code and its user interface, as well as results for the Damping Rings, will be presented.

  2. A Mechanism to Avoid Collusion Attacks Based on Code Passing in Mobile Agent Systems

    NASA Astrophysics Data System (ADS)

    Jaimez, Marc; Esparza, Oscar; Muñoz, Jose L.; Alins-Delgado, Juan J.; Mata-Díaz, Jorge

    Mobile agents are software entities consisting of code, data, state and itinerary that can migrate autonomously from host to host executing their code. Despite its benefits, security issues strongly restrict the use of code mobility. The protection of mobile agents against the attacks of malicious hosts is considered the most difficult security problem to solve in mobile agent systems. In particular, collusion attacks have been barely studied in the literature. This paper presents a mechanism that avoids collusion attacks based on code passing. Our proposal is based on a Multi-Code agent, which contains a different variant of the code for each host. A Trusted Third Party is responsible for providing the information to extract its own variant to the hosts, and for taking trusted timestamps that will be used to verify time coherence.

  3. QR code based noise-free optical encryption and decryption of a gray scale image

    NASA Astrophysics Data System (ADS)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  4. TVENT1: a computer code for analyzing tornado-induced flow in ventilation systems

    SciTech Connect

    Andrae, R.W.; Tang, P.K.; Gregory, W.S.

    1983-07-01

    TVENT1 is a new version of the TVENT computer code, which was designed to predict the flows and pressures in a ventilation system subjected to a tornado. TVENT1 is essentially the same code but has added features for turning blowers off and on, changing blower speeds, and changing the resistance of dampers and filters. These features make it possible to depict a sequence of events during a single run. Other features also have been added to make the code more versatile. Example problems are included to demonstrate the code's applications.

  5. Clustering of neural code words revealed by a first-order phase transition

    NASA Astrophysics Data System (ADS)

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
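The entropy-versus-distance analysis described above can be illustrated on synthetic data: count the observed code words at each Hamming distance from the all-silent reference state and take log2 of the count as a crude entropy profile (the data here are synthetic, not retinal recordings):

```python
from collections import Counter
from itertools import product
import math

# Toy entropy profile of code words as a function of Hamming distance from a
# reference state (here the all-silent state).

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def entropy_vs_distance(words, reference):
    by_d = Counter(hamming(w, reference) for w in words)
    return {d: math.log2(n) for d, n in sorted(by_d.items())}

# Synthetic "recorded" states over 6 neurons: all words with at most 2 spikes.
words = [w for w in product((0, 1), repeat=6) if sum(w) <= 2]
silent = (0,) * 6
profile = entropy_vs_distance(words, silent)
# profile maps distance -> log2(#words): {0: 0, 1: log2(6), 2: log2(15)}
```

In the retinal data, gaps in such a profile (distances with anomalously few code words) are what reveal the clustering into distinct groups.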

  6. Fast synchronization recovery for lossy image transmission with a suffix-rich Huffman code

    NASA Astrophysics Data System (ADS)

    Yang, Te-Chung; Kuo, C.-C. Jay

    1998-10-01

A new entropy codec, which can recover quickly from the loss of synchronization due to transmission errors, is proposed and applied to wireless image transmission in this research. This entropy codec is designed based on the Huffman code with a careful choice of the assignment of 1's and 0's to each branch of the Huffman tree. The design satisfies the suffix-rich property, i.e. the number of codewords that are suffixes of other codewords is maximized. After the Huffman coding tree is constructed, the source can be coded by using the traditional Huffman code. Thus, this coder introduces no overhead and sacrifices no coding efficiency. Statistically, the decoder can automatically recover the lost synchronization with the shortest error propagation length. Experimental results show that fast synchronization recovery reduces quality degradation of the reconstructed image while maintaining the same coding efficiency.
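The key point above is that the tree shape (and hence the code lengths and compression ratio) is fixed by the standard Huffman construction, while the 0/1 branch labeling remains a free choice that can be exploited for the suffix-rich property. A minimal Huffman construction makes this separation visible; the labeling below is arbitrary, not the paper's suffix-rich assignment:

```python
import heapq
from itertools import count

# Minimal Huffman construction. The tree shape fixes the codeword lengths;
# which branch gets "0" and which gets "1" is a free design choice.

def huffman_codes(freqs):
    tick = count()  # tie-breaker so heapq never compares the code dicts
    heap = [(f, next(tick), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}   # arbitrary branch labeling
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

codes = huffman_codes({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
# Prefix-free by construction; relabeling branches changes the bit patterns
# (and suffix structure) but not the lengths.
```

Any relabeling of the branches yields the same compression, which is why the suffix-rich variant costs nothing in coding efficiency.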

  7. A Multi-Alphabet Arithmetic Coding Hardware Implementation for Small FPGA Devices

    NASA Astrophysics Data System (ADS)

    Biasizzo, Anton; Novak, Franc; Korošec, Peter

    2013-01-01

    Arithmetic coding is a lossless compression algorithm with variable-length source coding. It is more flexible and efficient than the well-known Huffman coding. In this paper we present a non-adaptive FPGA implementation of a multi-alphabet arithmetic coding with separated statistical model of the data source. The alphabet of the data source is a 256-symbol ASCII character set and does not include the special end-of-file symbol. No context switching is used in the proposed design which gives maximal throughput without pipelining. We have synthesized the design for Xilinx FPGA devices and used their built-in hardware resources.
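The interval-narrowing idea behind arithmetic coding can be shown with a minimal float-based coder for a fixed (non-adaptive) model; a hardware design like the one above instead works with integer registers and renormalization, and this toy version only handles short messages before double precision runs out:

```python
# Toy non-adaptive arithmetic coder: each symbol narrows [low, high) in
# proportion to its probability; any number in the final interval encodes
# the whole message.

def intervals(model):
    cum, lo = {}, 0.0
    for s, p in model.items():
        cum[s] = (lo, lo + p)
        lo += p
    return cum

def encode(msg, model):
    cum, low, high = intervals(model), 0.0, 1.0
    for s in msg:
        span = high - low
        low, high = low + span * cum[s][0], low + span * cum[s][1]
    return (low + high) / 2

def decode(value, n, model):
    cum, low, high, out = intervals(model), 0.0, 1.0, []
    for _ in range(n):
        span = high - low
        x = (value - low) / span
        for s, (a, b) in cum.items():
            if a <= x < b:
                out.append(s)
                low, high = low + span * a, low + span * b
                break
    return "".join(out)

model = {"a": 0.6, "b": 0.3, "c": 0.1}
code = encode("abcab", model)       # a single number in (0, 1)
```

A non-adaptive design like the FPGA implementation keeps the `model` table fixed, which removes the context-switching logic at the cost of compression on sources with changing statistics.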

8. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    SciTech Connect

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  9. Code OK2—A simulation code of ion-beam illumination on an arbitrary shape and structure target

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Someya, T.

    2004-08-01

For computer simulations of heavy ion beam (HIB) irradiation on a spherical fuel pellet in heavy ion fusion (HIF), the code OK1 was developed and presented in [Comput. Phys. Commun. 157 (2004) 160-172]. The new code OK2 is a modified, upgraded computer program for more common purposes in the research fields of medical treatment and material processing as well as HIF. OK2 provides computational capabilities of a three-dimensional ion beam energy deposition on a target with an arbitrary shape and structure. Program summary: Title of program: OK2. Catalogue identifier: ADTZ. Other versions of this program [1]: Title of the program: OK1. Catalogue identifier: ADST. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: PC (Pentium 4, ˜1 GHz or more recommended). Operating system: Windows or UNIX. Program language used: C++. Memory required to execute with typical data: 2048 MB. No. of bits in a word: 32. No. of processors used: 1 CPU. Has the code been vectorized or parallelized: No. No. of bytes in distributed program, including test data: 17 334. No. of lines in distributed program, including test data: 1487. Distribution format: tar gzip file. Nature of physical problem: In the research areas of HIF (Heavy Ion Beam Inertial Fusion) energy [1-4] and medical and material sciences [5], ion energy deposition profiles need to be evaluated and calculated precisely. Due to the favorable energy deposition behavior of ions in matter [1-4], it is expected that ion beams would be one of the preferable candidates in various fields including HIF and material processing. Especially in HIF, for a successful fuel ignition and a sufficient fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [4,6,7]. In order to meet this requirement we need to evaluate the uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB

  10. Codes of environmental management practice: Assessing their potential as a tool for change

    SciTech Connect

    Nash, J.; Ehrenfeld, J.

    1997-12-31

Codes of environmental management practice emerged as a tool of environmental policy in the late 1980s. Industry and other groups have developed codes for two purposes: to change the environmental behavior of participating firms and to increase public confidence in industry's commitment to environmental protection. This review examines five codes of environmental management practice: Responsible Care, the International Chamber of Commerce's Business Charter for Sustainable Development, ISO 14000, the CERES Principles, and The Natural Step. The first three codes have been drafted and promoted primarily by industry; the others have been developed by non-industry groups. These codes have spurred participating firms to introduce new practices, including the institution of environmental management systems, public environmental reporting, and community advisory panels. The extent to which codes are introducing a process of cultural change is considered in terms of four dimensions: new consciousness, norms, organization, and tools. 94 refs., 3 tabs.

  11. The barriers to clinical coding in general practice: a literature review.

    PubMed

    de Lusignan, S

    2005-06-01

    Clinical coding is variable in UK general practice. The reasons for this remain undefined. This review explains why there are no readily available alternatives to recording structured clinical data and reviews the barriers to recording structured clinical data. Methods used included a literature review of bibliographic databases, university health informatics departments, and national and international medical informatics associations. The results show that the current state of development of computers and data processing means there is no practical alternative to coding data. The identified barriers to clinical coding are: the limitations of the coding systems and terminologies and the skill gap in their use; recording structured data in the consultation takes time and is distracting; the level of motivation of primary care professionals; and the priority within the organization. A taxonomy is proposed to describe the barriers to clinical coding. This can be used to identify barriers to coding and facilitate the development of strategies to overcome them.

  12. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computer efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation is discussed, and comparison is made with simplified analytic solutions to test numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
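At its simplest, the one-dimensional straight-ahead transport picture underlying such shielding codes reduces, for a single monoenergetic species with no secondary production, to exponential attenuation through successive material layers; real HZETRN couples many species and energies, and the cross sections below are invented:

```python
import math

# Schematic straight-ahead attenuation: fluence through a stack of slabs,
# phi(x) = phi0 * exp(-sigma * x) per layer (no secondaries, one species).

def attenuate(phi0, layers):
    """layers: list of (macroscopic cross section per cm, thickness in cm)."""
    phi = phi0
    for sigma, thickness in layers:
        phi *= math.exp(-sigma * thickness)
    return phi

phi = attenuate(1.0, [(0.02, 10.0), (0.05, 5.0)])  # two-slab shield
```

The Boltzmann-equation machinery of the full code adds exactly what this sketch leaves out: energy loss, nuclear fragmentation, and the secondary-particle source terms.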

  13. Evaluation in industry of a draft code of practice for manual handling.

    PubMed

    Ashby, Liz; Tappin, David; Bentley, Tim

    2004-05-01

    This paper reports findings from a study which evaluated the draft New Zealand Code of Practice for Manual Handling. The evaluation assessed the ease of use, applicability and validity of the Code and in particular the associated manual handling hazard assessment tools, within New Zealand industry. The Code was studied in a sample of eight companies from four sectors of industry. Subjective feedback and objective findings indicated that the Code was useful, applicable and informative. The manual handling hazard assessment tools incorporated in the Code could be adequately applied by most users, with risk assessment outcomes largely consistent with the findings of researchers using more specific ergonomics methodologies. However, some changes were recommended to the risk assessment tools to improve usability and validity. The evaluation concluded that both the Code and the tools within it would benefit from simplification, improved typography and layout, and industry-specific information on manual handling hazards.

  14. The weight hierarchies and chain condition of a class of codes from varieties over finite fields

    NASA Technical Reports Server (NTRS)

    Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal support structures of the subcodes and are very useful in several fields. The chain condition of a linear code has been found convenient for studying the generalized Hamming weights of product codes. In this paper we consider a class of codes defined over certain varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of the classical groups over finite fields, i.e., the symplectic, unitary, and orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that these codes satisfy the chain condition.
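
    For small codes, the weight hierarchy d_1 ≤ d_2 ≤ … ≤ d_k can be computed by brute force: d_r is the minimum support size over all r-dimensional subcodes, and the support of a subcode is the union of the supports of any basis. The sketch below does this over GF(2), using the [7,4] Hamming code as a worked example (the generator matrix shown is one standard choice).

    ```python
    from itertools import combinations

    def rank_gf2(vecs):
        """Rank of GF(2) vectors given as integer bitmasks (Gaussian elimination)."""
        pivots = {}
        r = 0
        for v in vecs:
            while v:
                h = v.bit_length() - 1  # highest set bit
                if h in pivots:
                    v ^= pivots[h]      # reduce by existing pivot
                else:
                    pivots[h] = v
                    r += 1
                    break
        return r

    def weight_hierarchy(G, n):
        """d_r = minimum support size of an r-dimensional subcode, r = 1..k.
        G: generator rows of a length-n binary code, as integer bitmasks."""
        k = len(G)
        words = set()
        for mask in range(1, 2 ** k):   # all nonzero codewords
            c = 0
            for i in range(k):
                if mask >> i & 1:
                    c ^= G[i]
            words.add(c)
        hierarchy = []
        for r in range(1, k + 1):
            best = n
            for sub in combinations(words, r):
                if rank_gf2(sub) < r:
                    continue            # not a basis of an r-dim subcode
                support = 0
                for c in sub:
                    support |= c        # support of subcode = union of basis supports
                best = min(best, bin(support).count("1"))
            hierarchy.append(best)
        return hierarchy

    # [7,4] Hamming code, generator rows as bitmasks
    G = [0b1000110, 0b0100101, 0b0010011, 0b0001111]
    print(weight_hierarchy(G, 7))  # → [3, 5, 6, 7], Wei's classic hierarchy
    ```

    The exhaustive search is exponential in k, so it is only a sanity-check tool; the point of results like those in the paper above is to determine hierarchies for code families where brute force is infeasible.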

  15. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
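
    The idea of compiler-generated glue between two register conventions can be modeled abstractly: at a cross-ABI call site, inserted code moves arguments from one register file to the other, calls the callee, and moves the result back. The sketch below is a toy Python model of that mechanism only; the register names (v0…, r3…) are hypothetical and do not reflect any particular architecture or the patented method's details.

    ```python
    def make_glue(legacy_fn, n_args):
        """Build glue for a call from extended-ABI code (args in hypothetical
        v-registers) into a legacy-ABI routine (args in hypothetical r-registers)."""
        def glue(ext_regs):
            legacy_regs = {}
            for i in range(n_args):
                # accommodate the register-configuration difference: v -> r
                legacy_regs[f"r{3 + i}"] = ext_regs[f"v{i}"]
            result = legacy_fn(legacy_regs)
            ext_regs["v0"] = result  # move return value back to the extended convention
            return ext_regs
        return glue

    def legacy_add(regs):            # legacy-ABI routine: args in r3, r4
        return regs["r3"] + regs["r4"]

    glue = make_glue(legacy_add, 2)
    print(glue({"v0": 7, "v1": 35})["v0"])  # 42
    ```

    In a real compiler the glue is emitted as machine code associated with the call instruction, but the data movement it performs is the same in spirit.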

  16. A model building code article on fallout shelters with recommendations for inclusion of requirements for fallout shelter construction in four national model building codes.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…

  17. Ducted-Fan Engine Acoustic Predictions Using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels shows fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and propagate rotor-stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  19. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  20. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Using density evolution for LDPC codes on some example ARA codes, we show that, for maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved for rate 1/2 as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, codes of any desired high rate close to 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph, or protograph, representation that allows for high-speed decoder implementation.
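
    The encoder structure described above (an accumulator precoding a repeat-accumulate chain) can be sketched in a few lines. This is a minimal toy encoder only: the seeded random permutation standing in for the interleaver is an assumption for illustration, whereas practical ARA designs use carefully constructed interleavers and puncturing.

    ```python
    import random

    def accumulate(bits):
        """Running XOR of the input, i.e. the 1/(1+D) accumulator over GF(2)."""
        out, s = [], 0
        for b in bits:
            s ^= b
            out.append(s)
        return out

    def ara_encode(info, q=3, seed=0):
        """Toy Accumulate-Repeat-Accumulate encoder:
        accumulate -> repeat each bit q times -> interleave -> accumulate."""
        pre = accumulate(info)                     # precoding accumulator
        rep = [b for b in pre for _ in range(q)]   # rate-1/q repetition
        perm = list(range(len(rep)))
        random.Random(seed).shuffle(perm)          # stand-in interleaver (assumption)
        inter = [rep[i] for i in perm]
        return accumulate(inter)                   # outer accumulator

    codeword = ara_encode([1, 0, 1, 1], q=3)
    print(len(codeword))  # 12 coded bits for 4 info bits: rate 1/3
    ```

    Every stage is linear over GF(2), so the all-zero message maps to the all-zero codeword, and the overall map has a sparse graph representation suitable for belief propagation.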