Science.gov

Sample records for a codes

  1. IMP: A performance code

    NASA Astrophysics Data System (ADS)

    Dauro, Vincent A., Sr.

IMP (Integrated Mission Program) is a simulation language and code used to model present and future Earth, Moon, or Mars missions. The profile is user controlled through selection from a large menu of events and maneuvers. A Fehlberg 7/13 Runge-Kutta integrator with error and step size control is used to numerically integrate the differential equations of motion (DEQ) of three spacecraft: a main, a target, and an observer. Through selection, the DEQs include guided thrust, oblate gravity, atmosphere drag, solar pressure, and Moon gravity effects. Guide parameters for thrust events and performance parameters of velocity changes (Delta-V) and propellant usage (maximum of five systems) are developed as needed. Print, plot, summary, and debug files are output.
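The error and step-size control attributed to the Fehlberg integrator can be sketched in miniature. The example below is illustrative only: it uses a classical fourth-order step with step doubling for the error estimate rather than IMP's Fehlberg 7(8) embedded pair, and the function names are hypothetical.

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, h=0.1, tol=1e-8):
    # Adaptive step-size control: compare one full step against two
    # half steps, accept the step only when the estimated local error
    # is below tol, then rescale h -- the same idea an embedded
    # Fehlberg pair implements more cheaply.
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_full)
        if err < tol:
            t, y = t + h, y_half
        # Standard growth/shrink rule for a 4th-order method.
        h *= min(2.0, max(0.1, 0.9 * (tol / (err + 1e-16)) ** 0.2))
    return y

# Exponential decay y' = -y has the exact solution exp(-t).
print(abs(integrate(lambda t, y: -y, 0.0, 1.0, 1.0) - math.exp(-1.0)))
```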

  2. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli. PMID:23724797
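The error-correction notion used here can be made concrete with a toy maximum-likelihood decoder: a received activity pattern is mapped to the nearest codeword in Hamming distance. The five-neuron code below is a hypothetical stand-in, not one of the paper's RF codes.

```python
def hamming(u, v):
    # Number of positions where two binary patterns differ.
    return sum(a != b for a, b in zip(u, v))

def decode(word, code):
    # Nearest-codeword (maximum-likelihood) decoding.
    return min(code, key=lambda c: hamming(word, c))

# Hypothetical combinatorial code on 5 neurons.
code = [(0, 0, 0, 0, 0), (1, 1, 0, 0, 0), (0, 1, 1, 1, 0), (0, 0, 0, 1, 1)]
received = (1, 1, 1, 0, 0)          # (1,1,0,0,0) with one bit flipped
print(decode(received, code))       # -> (1, 1, 0, 0, 0)
```

How often such decoding recovers the transmitted codeword under noise is exactly the error-correcting performance the authors compare between RF codes and random codes.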

  3. A coded tracking telemetry system

    USGS Publications Warehouse

    Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.

    1989-01-01

We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a microcomputer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.
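The receiver behaviour described (record a timestamp only for pre-programmed codes, ignore everything else) might be sketched as follows; the ID values and function names are invented for illustration.

```python
from datetime import datetime, timezone

PROGRAMMED = {0x2A, 0x3F, 0x51}   # hypothetical transmitter ID codes
log = []                          # (code, time-of-reception) records

def on_reception(code, when=None):
    # Keep a timestamped record only for recognized codes, mirroring
    # the receiver's behaviour of ignoring signals it was not
    # pre-programmed to accept.
    if code in PROGRAMMED:
        log.append((code, when or datetime.now(timezone.utc)))
        return True
    return False

on_reception(0x2A)   # recorded
on_reception(0x99)   # ignored: not a programmed code
print(len(log))      # -> 1
```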

  4. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  5. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. PMID:24698943
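The comma-free property at the top of this scale is easy to test directly: a trinucleotide code is comma-free when no codeword appears in a shifted frame of any two-codeword concatenation. A minimal checker, exercised on toy codes (not the code X from the abstract):

```python
def is_comma_free(code):
    # A trinucleotide code X is comma-free if no codeword of X occurs
    # in a shifted (non-reading) frame of any concatenation w1 + w2
    # of codewords of X.
    for w1 in code:
        for w2 in code:
            s = w1 + w2
            if s[1:4] in code or s[2:5] in code:
                return False
    return True

print(is_comma_free({"AAC", "AAT"}))  # -> True
print(is_comma_free({"AAA"}))         # -> False: shifts of AAA+AAA contain AAA
```

A comma-free code therefore has RFC probability 1: any window of consecutive codewords can only be parsed in the reading frame.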

  6. SLINGSHOT - a Coilgun Design Code

    SciTech Connect

    MARDER, BARRY M.

    2001-09-01

The Sandia coilgun [1,2,3,4,5] is an inductive electromagnetic launcher. It consists of a sequence of powered, multi-turn coils surrounding a flyway of circular cross-section through which a conducting armature passes. When the armature is properly positioned with respect to a coil, a charged capacitor is switched into the coil circuit. The rising coil currents induce a current in the armature, producing a repulsive accelerating force. The basic numerical tool for modeling the coilgun is the SLINGSHOT code, an expanded, user-friendly successor to WARP-10 [6]. SLINGSHOT computes the currents in the coils and armature, finds the forces produced by those currents, and moves the armature through the array of coils. In this approach, the cylindrically symmetric coils and armature are subdivided into concentric hoops with rectangular cross-section, in each of which the current is assumed to be uniform. The ensemble of hoops is treated as coupled circuits. The specific heats and resistivities of the hoops are found as functions of temperature and used to determine the resistive heating. The code calculates the resistances and inductances for all hoops, and the mutual inductances for all hoop pairs. Using these, it computes the hoop currents from their circuit equations, finds the forces from the products of these currents and the mutual inductance gradient, and moves the armature. Treating the problem as a set of coupled circuits is a fast and accurate approach compared to solving the field equations. Its use, however, is restricted to problems in which the symmetry dictates the current paths. This paper is divided into three parts. The first presents a demonstration of the code. The second describes the input and output. The third part describes the physical models and numerical methods used in the code. It is assumed that the reader is familiar with coilguns.

  7. The Proteomic Code: a molecular recognition code for proteins

    PubMed Central

    Biro, Jan C

    2007-01-01

    Background The Proteomic Code is a set of rules by which information in genetic material is transferred into the physico-chemical properties of amino acids. It determines how individual amino acids interact with each other during folding and in specific protein-protein interactions. The Proteomic Code is part of the redundant Genetic Code. Review The 25-year-old history of this concept is reviewed from the first independent suggestions by Biro and Mekler, through the works of Blalock, Root-Bernstein, Siemion, Miller and others, followed by the discovery of a Common Periodic Table of Codons and Nucleic Acids in 2003 and culminating in the recent conceptualization of partial complementary coding of interacting amino acids as well as the theory of the nucleic acid-assisted protein folding. Methods and conclusions A novel cloning method for the design and production of specific, high-affinity-reacting proteins (SHARP) is presented. This method is based on the concept of proteomic codes and is suitable for large-scale, industrial production of specifically interacting peptides. PMID:17999762

  8. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
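Over GF(2), the theorem's content (minimum weight d1·d2 for the tensor product, with the count of minimum-weight words multiplying as A1·A2) can be spot-checked by brute force on small codes. The generator matrices below are a standard repetition and single-parity-check pair, chosen only for illustration.

```python
import itertools

def kron(A, B):
    # Kronecker product of generator matrices over GF(2): generates
    # the tensor product code.
    return [[a * b % 2 for a in ra for b in rb] for ra in A for rb in B]

def codewords(G):
    # Enumerate all codewords generated by G over GF(2).
    k = len(G)
    words = set()
    for msg in itertools.product([0, 1], repeat=k):
        words.add(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                        for col in zip(*G)))
    return words

def min_weight_count(words):
    # Minimum nonzero weight d and the number of weight-d codewords.
    nz = [w for w in words if any(w)]
    d = min(sum(w) for w in nz)
    return d, sum(1 for w in nz if sum(w) == d)

G1 = [[1, 1, 1]]              # [3,1] repetition code:   d1 = 3, A1 = 1
G2 = [[1, 0, 1], [0, 1, 1]]   # [3,2] parity-check code: d2 = 2, A2 = 3
d1, a1 = min_weight_count(codewords(G1))
d2, a2 = min_weight_count(codewords(G2))
d, a = min_weight_count(codewords(kron(G1, G2)))
print(d == d1 * d2, a == a1 * a2)  # -> True True
```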

  9. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is analyzed.

  10. Why comply with a code of ethics?

    PubMed

    Spielthenner, Georg

    2015-05-01

A growing number of professional associations and occupational groups are creating codes of ethics with the goal of guiding their members, protecting service users, and safeguarding the reputation of the profession. There is a great deal of literature dealing with the question of the extent to which ethical codes can achieve their desired objectives. The present paper does not contribute to this debate. Its aim is rather to investigate how rational it is to comply with codes of conduct. It is natural and virtually inevitable for a reflective person to ask why one should pay any attention to ethical codes, in particular if following a code is not in one's own interest. In order to achieve the aim of this paper, I shall (in the "Quasi-reasons for complying with an ethical code" section) discuss reasons that only appear to be reasons for complying with a code. In the "Code-independent reasons" section, I shall present genuine practical reasons that, however, turn out to be reasons of the wrong kind. The "Code-dependent reasons" section finally presents the most important reasons for complying with ethical codes. The paper argues that while ethical codes do not necessarily yield reasons for action, professionals can have genuine reasons for complying with a code, which may, however, be rather weak and easily overridden by reasons for deviating from the code. PMID:25185873

  11. The chromatin regulatory code: Beyond a histone code

    NASA Astrophysics Data System (ADS)

    Lesne, A.

    2006-03-01

    In this commentary on the contribution by Arndt Benecke in this issue, I discuss why the notion of “chromatin code” introduced and elaborated in this paper is to be preferred to that of “histone code”. Speaking of a code as regards nucleosome conformation and histone tail post-translational modifications only makes sense within the chromatin fiber, where their physico-chemical features can be translated into regulatory programs at the genome level, by means of a complex, multi-level interplay with the fiber architecture and dynamics settled in the course of Evolution. In particular, this chromatin code presumably exploits allosteric transitions of the chromatin fiber. The chromatin structure dependence of its translation suggests two alternative modes of transcription initiation regulation, also proposed in the paper by A. Benecke in this issue for interpreting strikingly bimodal micro-array data.

  12. A Better Handoff for Code Officials

    SciTech Connect

    Conover, David R.; Yerkes, Sara

    2010-09-24

    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn’t leave much time to search www.energycodes.gov, www.iccsafe.org, or the range of other helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach’s game plan, the Resource Guide is an "energy playbook" for code officials.

  13. SPINK, A Thin Elements Spin Tracking Code

    SciTech Connect

    Luccio, Alfredo U.

    2009-08-04

    Spink is a spin tracking code for spin polarized particles. The code tracks both trajectories in 3D and spin. It works using thick element modeling from MAD and thin element modeling based on the BMT equation to track spin. The code is written in Fortran and typically runs on a Linux platform, either sequentially or MPI-parallel.

  14. A Mathematical Representation of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Hill, Vanessa J.; Rowlands, Peter

    Algebraic and geometric representations of the genetic code are used to show their functions in coding for amino acids. The algebra is a 64-part vector quaternion combination, and the geometry is based on the structure of the regular icosidodecahedron. An almost perfect pattern suggests that this is a biologically significant way of representing the genetic code.

  15. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
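The outer generator polynomial X^16+X^12+X^5+1 in the first example is the familiar CCITT CRC-16 polynomial. Below is a bare-bones sketch of the division it implements; note this is raw polynomial arithmetic only, whereas the full X.25 frame check sequence additionally uses 0xFFFF initialization, bit reflection, and a final complement.

```python
def crc16(data: bytes, poly=0x1021, crc=0x0000):
    # Bitwise long division by X^16 + X^12 + X^5 + 1 (0x1021),
    # MSB first, zero initial value, no final XOR.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

msg = b"telecommand"
r = crc16(msg)
# A message with its 16-bit remainder appended divides evenly, which
# is exactly the error-detection check the outer decoder performs.
print(crc16(msg + bytes([r >> 8, r & 0xFF])))  # -> 0
```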

  16. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
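The row-parity erasure-flagging step can be illustrated in miniature with a toy matrix (not the 9 x 8 box code itself): rows whose parity check fails are the ones whose symbols would be declared erasures for the Reed-Solomon stage.

```python
def rows_with_parity_errors(matrix, parity="even"):
    # Return indices of rows failing their parity check; in the box
    # code these rows mark erasure positions for Reed-Solomon
    # correction.
    want = 0 if parity == "even" else 1
    return [i for i, row in enumerate(matrix) if sum(row) % 2 != want]

m = [[1, 1, 0, 0],   # even parity: ok
     [1, 0, 0, 0],   # odd sum: flagged
     [0, 0, 0, 0]]   # even parity: ok
print(rows_with_parity_errors(m))  # -> [1]
```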

  17. A code of professional conduct for members.

    PubMed

    2006-09-01

    In light of new legislation and changing practice, together with the impending legal status of members who practise clinical photography and/or clinical videography, the Institute of Medical Illustrators (IMI) has revised and brought together A Code of Responsible Practice and its Code of Conduct. The new document, A Code of Professional Conduct for Members, details the standards required to maintain professional practice. Within the text, the Code refers to members, and where specifically appropriate, to clinical photographers. The title, 'clinical photographer', is used where the code applies to members practising clinical photography and/or videography. PMID:17162339

  18. SL4 code - A user's manual

    NASA Technical Reports Server (NTRS)

    Chou, Y. S.

    1973-01-01

The SL-4 code is a computer-automated scheme for solving the equations describing the fully-coupled viscous, radiating flow over the front face of a blunt body which may or may not be ablating. The code provides a basis for obtaining predictions of the surface heating to a body entering any planetary atmosphere at hyperbolic velocities. The code is written in FORTRAN V and is operational on both the Univac 1108 (EXEC 8) system in use at LMSC and the CDC 7600 system in use at the University of California, Berkeley. An overview of the SL-4 code computational logic flow, a description of the input requirements and output results, and comments on the practical use of the code are presented. As such, this report forms a user's manual for operation of the SL-4 code.

  19. The VISC code: A user's manual

    NASA Technical Reports Server (NTRS)

    Wilson, K.

    1973-01-01

The VISC code is a computer-automated scheme for solving the equations describing the fully coupled viscous, radiating flow at the stagnation point of a blunt body which may or may not be ablating. The code provides a basis for obtaining predictions of the stagnation-point heating to a body entering any planetary atmosphere at hyperbolic velocities. The code is written in FORTRAN V and is operational on both the Univac 1108 (EXEC 8) system and the CDC 7600 system. The report gives an overview of the VISC code computational logic flow, a description of the input requirements and output results, and comments on the practical use of the code. As such, the report forms a user's manual for operation of the VISC code.

  20. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  1. Towards a testbed for malicious code detection

    SciTech Connect

Lo, R.; Kerchen, P.; Crawford, R.; Ho, W.; Crossley, J.; Fink, G.; Levitt, K.; Olsson, R.; Archer, M. (Div. of Computer Science)

    1991-01-01

    This paper proposes an environment for detecting many types of malicious code, including computer viruses, Trojan horses, and time/logic bombs. This malicious code testbed (MCT) is based upon both static and dynamic analysis tools developed at the University of California, Davis, which have been shown to be effective against certain types of malicious code. The testbed extends the usefulness of these tools by using them in a complementary fashion to detect more general cases of malicious code. Perhaps more importantly, the MCT allows administrators and security analysts to check a program before installation, thereby avoiding any damage a malicious program might inflict. 5 refs., 2 figs., 2 tabs.

  2. A Code of Practice for Further Education.

    ERIC Educational Resources Information Center

    Walker, Liz; Turner, Anthea

    This draft is the outcome of a project in which colleges and further education (FE) teacher education providers worked to pilot a code developed by students and staff at Loughborough College in England. The code is intended to be a resource for improving practice and enhancing the standing of the FE sector. It focuses on the essentials, affirms…

  3. MHDust: A 3-fluid dusty plasma code

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel

MHDust is a next-generation 3-fluid magnetized dusty plasma code, treating the inertial dynamics of both the dust and ion components. Coded in ANSI C, the numerical method employs Leap-Frog and Dufort-Frankel integration schemes. Features include: nonlinear collisional terms, quasi-neutrality or continuity-based electron densities, and dynamical dust charge number. Tests of wave-mode propagation (acoustic and electromagnetic) allow a comparison to linear wave-mode theory. Additional nonlinear phenomena are presented, including magnetic reconnection and shear-flow instabilities. Relevant parameters for the space environment are considered, allowing a comparison to be made with previous dusty plasma codes (DENISIS). The utility of the code is expanded through the possibility of small dust mass, which allows MHDust to be used as a 2-ion plasma code. MHDust considerably expands the range of numerical investigations into nonlinear phenomena in the field of astrophysical dusty plasmas.

  4. A distributed particle simulation code in C++

    SciTech Connect

    Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.

    1992-01-01

    Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.

  5. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  6. The Nuremberg Code-A critique.

    PubMed

    Ghooi, Ravindra B

    2011-04-01

The Nuremberg Code, drafted at the end of the Doctors' Trial in Nuremberg in 1947, has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, which were in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of the ten principles in the Nuremberg Code are derived from the 1931 Guidelines, and two of the four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside since, unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics. PMID:21731859

  7. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
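The three steps of the "direct method" can be mimicked with a toy lookup table. The phrases, codes, and correction table below are invented for illustration; the Survey's auxiliary files are far larger.

```python
# Hypothetical term table and invalid-combination table.
TERMS = {"coarse sand": "Z3", "sand": "Z", "clay": "K"}
INVALID = {("Z", "Z3"): ("Z3",)}  # drop the generic code next to a specific one

def code_log_entry(text):
    # Step 1: search for defined word combinations (longest phrase
    # first) and assign codes.
    codes = []
    for phrase in sorted(TERMS, key=len, reverse=True):
        if phrase in text:
            codes.append(TERMS[phrase])
            text = text.replace(phrase, " ")
    # Step 2: delete duplicated codes, keeping the first occurrence.
    seen, unique = set(), []
    for c in codes:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    # Step 3: correct defined incorrect code combinations.
    key = tuple(sorted(unique))
    return list(INVALID.get(key, unique))

print(code_log_entry("coarse sand with clay"))
```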

  8. A Fortran 90 code for magnetohydrodynamics

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  9. Report on a workshop concerning code validation

    SciTech Connect

    1996-12-01

The design of wind turbine components is becoming more critical as turbines become lighter and more dynamically active. Computer codes that will reliably predict turbine dynamic response are, therefore, more necessary than before. However, predicting the dynamic response of very slender rotating structures that operate in turbulent winds is not a simple matter. Even so, codes for this purpose have been developed and tested in North America and in Europe, and it is important to disseminate information on this subject. The purpose of this workshop was to allow those involved in the wind energy industry in the US to assess the progress in validation of the codes most commonly used for structural/aero-elastic wind turbine simulation. The theme of the workshop was, "How do we know it's right?" This was the question that participants were encouraged to ask themselves throughout the meeting in order to avoid the temptation of presenting information in a less-than-critical atmosphere. Other questions posed at the meeting were: What is the proof that the codes used can truthfully represent the field data? At what steps were the codes tested against known solutions, or against reliable field data? How should the designer or user validate results? What computer resources are needed? How do codes being used in Europe compare with those used in the US? How does the code used affect industry certification? What can be expected in the future?

  10. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  11. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  12. EMPIRE: A code for nuclear astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.

    2016-01-01

The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparisons to experimental data show consistent agreement for all relevant channels.

  13. A Subband Coding Method for HDTV

    NASA Technical Reports Server (NTRS)

    Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.

  14. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

  15. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level cognitive abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie it. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562
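The core predictive-coding loop can be written in a few lines in its generic gradient form (this is not the PC/BC-DIM update rule itself): latent causes r are adjusted until the top-down prediction W r explains the input, with the residual e carrying the prediction error. The weights and input below are arbitrary illustrative values.

```python
def predictive_coding(x, W, steps=200, lr=0.05):
    # Generic predictive-coding inference: iteratively reduce the
    # prediction error e = x - W r by nudging the latent causes r
    # along the error projected back through the weights.
    n = len(W[0])
    r = [0.0] * n
    for _ in range(steps):
        pred = [sum(W[i][j] * r[j] for j in range(n)) for i in range(len(x))]
        e = [xi - pi for xi, pi in zip(x, pred)]
        for j in range(n):
            r[j] += lr * sum(W[i][j] * e[i] for i in range(len(x)))
    return r, e

W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # hypothetical generative weights
x = [1.0, 2.0, 3.0]                       # input explained by r = (1, 2)
r, e = predictive_coding(x, W)
print(max(abs(v) for v in e))             # residual error, near zero
```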

  16. MACRAD: A mass analysis code for radiators

    SciTech Connect

    Gallup, D.R.

    1988-01-01

    A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length to diameter ratio, vapor to wick radius, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.

  17. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmark to real-world application codes is presented. This will demonstrate the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experiences with this toolkit.

  18. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  19. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  20. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
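
    The Gibbs free-energy minimization that TEA's abstract describes can be illustrated with a deliberately tiny sketch: a hypothetical two-species equilibrium A ⇌ B with an assumed dimensionless free-energy difference, minimized by brute force. This is only the principle in miniature, not TEA's multi-species iterative Lagrangian solver.

```python
import math

def gibbs_mix(x, dg_rt):
    """Dimensionless Gibbs energy G/RT of an ideal A/B mixture at mole
    fraction x of B; dg_rt is the assumed (g_B - g_A)/RT."""
    return x * dg_rt + x * math.log(x) + (1.0 - x) * math.log(1.0 - x)

def equilibrium_fraction(dg_rt, steps=100000):
    """Brute-force minimization of G/RT over x in (0, 1)."""
    best_x, best_g = None, float("inf")
    for i in range(1, steps):
        x = i / steps
        g = gibbs_mix(x, dg_rt)
        if g < best_g:
            best_x, best_g = x, g
    return best_x

# Stationarity dG/dx = 0 gives x/(1-x) = exp(-dg_rt) for this toy problem,
# so the numerical minimum can be checked against a closed form.
x_num = equilibrium_fraction(1.0)
x_exact = 1.0 / (1.0 + math.exp(1.0))
```

    Real equilibrium solvers replace the grid search with constrained optimization (element-balance constraints via Lagrange multipliers), but the objective being minimized is the same kind of Gibbs function.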

  1. MININEC: A mini-numerical electromagnetics code

    NASA Astrophysics Data System (ADS)

    Julian, A. J.; Logan, J. C.; Rockway, J. W.

    1982-09-01

    The merits of techniques that might yield a reduced version of an antenna modeling code, applicable to small problems and small computer resources, were investigated. The result is the identification of one promising numerical approach suggested by Dr. D. R. Wilton of the University of Mississippi. The approach has been coded in BASIC and implemented on a microcomputer. The computer code has been dubbed MININEC (Mini-Numerical Electromagnetics Code). MININEC solves an integral equation relating the electric field and the vector and scalar potentials. The solution involves a modified Galerkin procedure. This formulation results in a compact code suitable for use on a microcomputer. MININEC solves for the impedance and currents on arbitrarily oriented wires including configurations with multiple junctions. Options include lumped impedance loading and far field patterns. MININEC has been written in the BASIC language compatible with many popular microcomputers. MININEC has been implemented on the NOSC Univac 1100/82, the NOSC VAX, a CDI microcomputer, and an Apple microcomputer.

  2. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    ERIC Educational Resources Information Center

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  3. FREEFALL: A seabed penetrator flight code

    SciTech Connect

    Hickerson, J.

    1988-01-01

    This report presents a one-dimensional model and computer program for predicting the motion of seabed penetrators. The program calculates the acceleration, velocity, and depth of a penetrator as a function of time from the moment of launch until the vehicle comes to rest in the sediment. The code is written in Pascal language for use on a small personal computer. Results are presented as printed tables and graphs. A comparison with experimental data is given which indicates that the accuracy of the code is perhaps as good as current techniques for measuring vehicle performance. 31 refs., 12 figs., 5 tabs.
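
    A minimal sketch of this kind of one-dimensional penetrator model, assuming quadratic water drag (buoyancy neglected) and a constant sediment resisting stress; all parameter values and the resistance law are illustrative, not taken from the report.

```python
def simulate_penetrator(mass=500.0, area=0.05, cd=0.7, rho_w=1025.0,
                        water_depth=100.0, sediment_strength=5.0e5,
                        dt=1e-3, g=9.81):
    """1D motion: free fall through water under gravity and quadratic drag,
    then a constant sediment resisting stress until the vehicle stops.
    Returns (impact_velocity, burial_depth) in SI units."""
    v, z = 0.0, 0.0          # downward velocity (m/s) and depth (m)
    impact_v = None
    while True:
        if z < water_depth:  # water column: gravity vs. quadratic drag
            a = g - 0.5 * rho_w * cd * area * v * v / mass
        else:                # sediment: constant resisting force
            if impact_v is None:
                impact_v = v
            if v <= 0.0:     # came to rest
                return impact_v, z - water_depth
            a = g - sediment_strength * area / mass
        v += a * dt          # explicit Euler step
        z += v * dt

impact_v, burial = simulate_penetrator()
```

    With these numbers the vehicle approaches its terminal velocity (about 16.5 m/s) before impact and buries itself a few meters into the sediment.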

  4. Student Codes of Conduct: A Guide to Policy Review and Code Development.

    ERIC Educational Resources Information Center

    New Jersey State Dept. of Education, Trenton. Div. of General Academic Education.

    Designed to assist New Jersey school districts in developing and implementing student codes of conduct, this document begins by examining the need for policy and clearly established rules, the rationale for codes of conduct, and the areas that such codes should address. Following a discussion of substantive and procedural rights and sources of…

  5. DUNE - a granular flow code

    SciTech Connect

    Slone, D M; Cottom, T L; Bateson, W B

    2004-11-23

    DUNE was designed to accurately model the spectrum of granular flow. Granular flow encompasses the motions of discrete particles. The particles are macroscopic in that there is no Brownian motion. The flow can be thought of as a dispersed phase (the particles) interacting with a fluid phase (air or water). Validation of the physical models proceeds in tandem with simple experimental confirmation. The current development team is working toward the goal of building a flexible architecture where existing technologies can easily be integrated to further the capability of the simulation. We describe the DUNE architecture in some detail using physics models appropriate for an imploding liner experiment.

  6. Should managers have a code of conduct?

    PubMed

    Bayliss, P

    1994-02-01

    Much attention is currently being given to values and ethics in the NHS. Issues of accountability are being explored as a consequence of the Cadbury report. The Institute of Health Services Management (IHSM) is considering whether managers should have a code of ethics. Central to this issue is what managers themselves think; the application of such a code may well stand or fall by whether managers are prepared to have ownership of it, and are prepared to make it work. Paul Bayliss reports on a survey of managers' views. PMID:10134423

  7. Code Blue: a family matter?

    PubMed

    Goforth, Rhonda

    2013-01-01

    The focus of this article is to encourage nurses and other healthcare staff to allow family members to be present during a resuscitation event. The author offers rationale, history, and simple guidelines for supporting families during this excruciating experience. PMID:23607158

  8. Performance of some block codes on a Gaussian channel

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.; Mceliece, R. J.

    1975-01-01

    A technique proposed by Chase (1972) is used to evaluate the performance of several fairly long binary block codes on a wideband additive Gaussian channel. Considerations leading to the use of Chase's technique are discussed. Chase's concepts are first applied to the most powerful practical class of binary codes, the BCH codes with Berlekamp's (1972) decoding algorithm. Chase's algorithm is then described along with proposed selection of candidate codes. Results are presented of applying Chase's algorithm to four binary codes: (23, 12) Golay code, (32, 16) second-order Reed-Muller code, (63, 36) 5-error correcting BCH code, and (95, 39) 9-error correcting shortened BCH code. It is concluded that there are many block codes of length not exceeding 100 with extremely attractive maximum likelihood decoding performance on a Gaussian channel. BCH codes decoded via Berlekamp's binary decoding algorithm and Chase's idea are close to being practical competitors to short-constraint length convolutional codes with Viterbi decoding.
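
    Chase's idea can be sketched on the smallest useful example: a systematic (7,4) Hamming inner code with soft-decision reception. The decoder flips the least-reliable hard-decision bits in every pattern, hard-decodes each trial, and keeps the candidate codeword closest to the received vector. This toy is an illustration of the technique, not the paper's implementation.

```python
import itertools

P = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]  # parity part of G = [I | P]

def encode(msg):
    """Systematic (7,4) Hamming encoding: 4 data bits + 3 parity bits."""
    parity = [sum(msg[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return list(msg) + parity

def hard_decode(bits):
    """Syndrome decoding: correct at most one bit error, return a codeword."""
    syn = tuple((sum(bits[i] * P[i][j] for i in range(4)) + bits[4 + j]) % 2
                for j in range(3))
    if syn != (0, 0, 0):
        cols = [tuple(row) for row in P] + [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
        bits = bits[:]
        bits[cols.index(syn)] ^= 1     # flip the bit the syndrome points at
    return bits

def chase2_decode(r, t=1):
    """Chase-2: try every flip pattern on the t least-reliable hard bits,
    hard-decode each trial, keep the codeword closest to r in Euclidean
    distance (antipodal mapping: bit b -> 1 - 2b)."""
    hard = [0 if x > 0 else 1 for x in r]
    weak = sorted(range(len(r)), key=lambda i: abs(r[i]))[:t]
    best, best_d = None, float("inf")
    for flips in itertools.product([0, 1], repeat=t):
        trial = hard[:]
        for f, i in zip(flips, weak):
            trial[i] ^= f
        cw = hard_decode(trial)
        d = sum((r[i] - (1 - 2 * cw[i])) ** 2 for i in range(len(r)))
        if d < best_d:
            best, best_d = cw, d
    return best

cw = encode([1, 0, 1, 1])
# Two channel errors, one of them unreliable (position 1, magnitude 0.2):
# plain syndrome decoding miscorrects, but the Chase trial that flips the
# unreliable bit recovers the transmitted codeword.
decoded = chase2_decode([-0.9, -0.2, -1.1, -0.8, -0.9, -1.0, 1.3])
```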

  9. A Code of Ethics for Democratic Leadership

    ERIC Educational Resources Information Center

    Molina, Ricardo; Klinker, JoAnn Franklin

    2012-01-01

    Democratic leadership rests on sacred values, awareness, judgement, motivation and courage. Four turning points in a 38-year school administrator's career revealed decision-making in problematic moments stemmed from values in a personal and professional code of ethics. Reflection on practice and theory added vocabulary and understanding to make…

  10. Finding the key to a better code: code team restructure to improve performance and outcomes.

    PubMed

    Prince, Cynthia R; Hines, Elizabeth J; Chyou, Po-Huang; Heegeman, David J

    2014-09-01

    Code teams respond to acute life threatening changes in a patient's status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team's resuscitation could be hindered. To improve the overall performance of our hospital's code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements in team member confidence in the skills specific to their role and clarity in their role's position were identified. Smaller improvements were seen in team leadership and reduction in the amount of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year since the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as decreased the average time-to-defibrillation, making it beneficial to many, including the team members, medical institution, and patients. PMID:24667218

  11. Materials management with a bar code reader.

    PubMed

    Kaplan, R S

    1990-01-01

    A materials management system capable of inventory control, accounting and the automatic recording of supplies for a clinical department has been developed for the George Washington University Hospital Department of Anesthesia. This system combines a microprocessor-based computer for data storage and a hand-held bar code reader to record the bar code scan of each item in the inventory. A relational software program with easy-to-use menus and help keys was written. Bar code information stored for each item includes item number, quantity, date and time of issue. Accumulated bar code scans are loaded into the computer by use of a serial port and then used to update current inventory in the computer. Comparison between current inventory and reorder levels by the computer will initiate automatic printing of appropriate purchase orders. Reorder levels are adjusted regularly, by comparing previous year or month usage to current needs; items already on order, items on back order and delivery lag time are also taken into account. PMID:10104851
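
    The reorder logic described here might be sketched as follows; the item names, reorder levels, and quantities are invented for illustration, and the scan batch stands in for data downloaded from the hand-held reader.

```python
class Inventory:
    """Toy sketch of a bar-code driven stock system: each scan records the
    issue of one unit; when stock falls to its reorder level a purchase
    order is queued once."""

    def __init__(self, stock, reorder_levels, reorder_qty):
        self.stock = dict(stock)
        self.reorder_levels = reorder_levels
        self.reorder_qty = reorder_qty
        self.issue_log = []          # (item, timestamp) records from scans
        self.purchase_orders = []

    def load_scans(self, scans):
        """Apply a batch of (item, timestamp) scans from the reader, then
        issue purchase orders for anything at or below its reorder level
        that is not already on order."""
        for item, ts in scans:
            self.stock[item] -= 1
            self.issue_log.append((item, ts))
        on_order = {item for item, _ in self.purchase_orders}
        for item, qty in self.stock.items():
            if qty <= self.reorder_levels.get(item, 0) and item not in on_order:
                self.purchase_orders.append((item, self.reorder_qty[item]))

inv = Inventory({"syringe": 10}, {"syringe": 8}, {"syringe": 50})
inv.load_scans([("syringe", "09:00"), ("syringe", "09:05"), ("syringe", "09:10")])
inv.load_scans([("syringe", "10:00")])   # already on order: no duplicate PO
```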

  12. TACO: a finite element heat transfer code

    SciTech Connect

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady state problems. Either plane or axisymmetric geometries can be analyzed. TACO has the capability to handle time or temperature dependent material properties and materials may be either isotropic or orthotropic. A variety of time and temperature dependent loadings and boundary conditions are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemical reaction kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, and the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code.
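
    The implicit transient solve that a code like TACO performs can be sketched in one dimension with finite differences (TACO itself is a 2D finite element code): a backward-Euler step reduces to one linear system per time step. All values here are illustrative.

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt, t_left, t_right):
    """One backward-Euler (implicit) step of dT/dt = alpha * d2T/dx2 on a
    1D bar with fixed-temperature (Dirichlet) ends, solved as one linear
    system: (I - r*L) T_new = T_old with r = alpha*dt/dx^2."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    b = T.copy()
    A[0, 0] = A[-1, -1] = 1.0        # boundary rows pin the end temperatures
    b[0], b[-1] = t_left, t_right
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1.0 + 2.0 * r
    return np.linalg.solve(A, b)

# March a uniform 20 C bar (ends held at 100 C / 0 C) toward steady state
T = np.full(11, 20.0)
for _ in range(2000):
    T = implicit_heat_step(T, alpha=1e-4, dx=0.01, dt=1.0,
                           t_left=100.0, t_right=0.0)
```

    Because the step is implicit it remains stable even though r = 1 here exceeds the explicit stability limit of 0.5; the profile relaxes to the linear steady state.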

  13. FLUKA: A Multi-Particle Transport Code

    SciTech Connect

    Ferrari, A.; Sala, P.R.; Fasso, A.; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  14. CHEETAH: A next generation thermochemical code

    SciTech Connect

    Fried, L.; Souers, P.

    1994-11-01

    CHEETAH is an effort to bring the TIGER thermochemical code into the 1990s. A wide variety of improvements have been made in Version 1.0. We have improved the robustness and ease of use of TIGER. All of TIGER's solvers have been replaced by new algorithms. We find that CHEETAH solves a wider variety of problems with no user intervention (e.g., no guesses for the C-J state) than TIGER did. CHEETAH has been made simpler to use than TIGER; typical use of the code occurs with the new standard run command. CHEETAH will make the use of thermochemical codes more attractive to practical explosive formulators. We have also made an extensive effort to improve over the results of TIGER. CHEETAH's version of the BKW equation of state (BKWC) is able to accurately reproduce energies from cylinder tests, something that other BKW parameter sets have been unable to do. Calculations performed with BKWC execute very quickly; typical run times are under 10 seconds on a workstation. In the future we plan to improve the underlying science in CHEETAH. More accurate equations of state will be used in the gas and the condensed phase. A kinetics capability will be added to the code that will predict reaction zone thickness. Further ease-of-use features will eventually be added; an automatic formulator that adjusts concentrations to match desired properties is planned.

  15. Building a Hydrodynamics Code with Kinetic Theory

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Bauer, Wolfgang; Colbry, Dirk; Pickett, Rodney; Strother, Terrance

    2013-08-01

    We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve a high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test-particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulation, especially for approaches which aim to study core-collapse supernovae.

  16. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
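
    The threshold-driven selection between coders of different rates can be sketched as follows; retaining the largest-magnitude DCT coefficients stands in for the paper's actual mixture of quantized DCT coders, so this is the selection idea only, not MBC itself.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix; the inverse transform is its transpose."""
    C = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            C[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def mbc_encode_block(block, rates=(16, 32, 48), max_err=2.0):
    """Threshold-driven coder selection: try progressively higher-rate DCT
    coders (more retained coefficients, largest magnitudes first) until the
    mean absolute error drops below max_err; fall back to all coefficients."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    order = np.argsort(-np.abs(coeffs), axis=None)  # flat indices, big first
    for n_keep in rates:
        mask = np.zeros(coeffs.size, dtype=bool)
        mask[order[:n_keep]] = True
        kept = np.where(mask.reshape(coeffs.shape), coeffs, 0.0)
        recon = C.T @ kept @ C
        if np.mean(np.abs(recon - block)) <= max_err:
            return n_keep, recon
    return coeffs.size, C.T @ coeffs @ C    # exact reconstruction

block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 4.0   # smooth ramp
n_keep, recon = mbc_encode_block(block)
```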

  17. A Germanium-Based, Coded Aperture Imager

    SciTech Connect

    Ziock, K P; Madden, N; Hull, E; William, C; Lavietes, T; Cork, C

    2001-10-31

    We describe a coded-aperture based, gamma-ray imager that uses a unique hybrid germanium detector system. A planar, germanium strip detector, eleven millimeters thick, is followed by a coaxial detector. The 19 x 19 strip detector (2 mm pitch) is used to determine the location and energy of low-energy events. The location of high-energy events is determined from the location of the Compton scatter in the planar detector, and the energy is determined from the sum of the coaxial and planar energies. With this geometry, we obtain useful quantum efficiency in a position-sensitive mode out to 500 keV. The detector is used with a 19 x 17 URA coded aperture to obtain spectrally resolved images in the gamma-ray band. We discuss the performance of the planar detector, the hybrid system and present images taken of laboratory sources.

  18. Towards a biological coding theory discipline.

    SciTech Connect

    May, Elebeoba Eni

    2003-09-01

    How can information required for the proper functioning of a cell, an organism, or a species be transmitted in an error-introducing environment? Clearly, similar to engineering communication systems, biological systems must incorporate error control in their information transmission processes. If genetic information in the DNA sequence is encoded in a manner similar to error control encoding, the received sequence, the messenger RNA (mRNA), can be analyzed using coding theory principles. This work explores potential parallels between engineering communication systems and the central dogma of genetics and presents a coding theory approach to modeling the process of protein translation initiation. The messenger RNA is viewed as a noisy encoded sequence and the ribosome as an error control decoder. Decoding models based on chemical and biological characteristics of the ribosome and the ribosome binding site of the mRNA are developed, and results of applying the models to Escherichia coli K-12 are presented.

  19. CAFE: A New Relativistic MHD Code

    NASA Astrophysics Data System (ADS)

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.

    2015-06-01

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.
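
    The HLLE flux formula mentioned in the abstract is easiest to see in its simplest scalar setting. Below is a first-order sketch for Burgers' equation with Davis wave-speed estimates; it omits CAFE's relativistic system, high-order reconstruction, and divergence control entirely.

```python
import numpy as np

def hlle_step(u, dx, dt):
    """One finite-volume step for Burgers' equation u_t + (u^2/2)_x = 0
    using the HLLE approximate Riemann solver with Davis wave-speed
    estimates s_L = min(u_L, u_R), s_R = max(u_L, u_R)."""
    ul, ur = u[:-1], u[1:]
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    sl = np.minimum(ul, ur)
    sr = np.maximum(ul, ur)
    denom = np.where(sr - sl == 0.0, 1.0, sr - sl)   # guard the HLL average
    flux = np.where(sl >= 0.0, fl,
           np.where(sr <= 0.0, fr,
                    (sr * fl - sl * fr + sl * sr * (ur - ul)) / denom))
    un = u.copy()                       # boundary cells held fixed (in/outflow)
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return un

# Right-moving shock: u=1 for x<0.2, u=0 elsewhere; shock speed = (1+0)/2
x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.2, 1.0, 0.0)
dx, dt = x[1] - x[0], 0.002             # CFL number 0.4
for _ in range(500):                    # advance to t = 1.0
    u = hlle_step(u, dx, dt)
```

    The first-order scheme smears the discontinuity over a few cells but propagates it at the correct speed, landing the shock near x = 0.7 at t = 1.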

  20. TAIR: A transonic airfoil analysis computer code

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.

    1981-01-01

    The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.

  1. LEGO: A modular accelerator design code

    SciTech Connect

    Cai, Y.; Donald, M.; Irwin, J.; Yan, Y.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors: TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in the three dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way both for the linear and nonlinear case. Currently, the code is used to design and simulate the lattices of PEP-II. It will also be used for commissioning.
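
    The symplectic integration mentioned above can be sketched with the simplest such scheme, kick-drift-kick leapfrog for a harmonic oscillator. Its energy error stays bounded over long runs instead of drifting, which is why accelerator tracking codes favor symplectic integrators; this toy is not LEGO's integrator.

```python
def leapfrog(q, p, omega2, dt, steps):
    """Kick-drift-kick leapfrog (symplectic, second order) for the
    Hamiltonian H = p^2/2 + omega2 * q^2 / 2."""
    for _ in range(steps):
        p -= 0.5 * dt * omega2 * q   # half kick
        q += dt * p                  # drift
        p -= 0.5 * dt * omega2 * q   # half kick
    return q, p

q0, p0 = 1.0, 0.0
e0 = 0.5 * p0 * p0 + 0.5 * q0 * q0
q1, p1 = leapfrog(q0, p0, omega2=1.0, dt=0.01, steps=100000)
e1 = 0.5 * p1 * p1 + 0.5 * q1 * q1   # after ~160 oscillation periods
```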

  2. Xenomicrobiology: a roadmap for genetic code engineering.

    PubMed

    Acevedo-Rocha, Carlos G; Budisa, Nediljko

    2016-09-01

    Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes in the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes and proteins/proteomes with non-canonical nucleic (XNAs) and amino (ncAAs) acids. Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code with focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also make it possible to study and understand the relation between code evolution and horizontal gene transfer. PMID:27489097

  3. SORD: A New Rupture Dynamics Modeling Code

    NASA Astrophysics Data System (ADS)

    Ely, G.; Minster, B.; Day, S.

    2005-12-01

    We report on our progress in validating our rupture dynamics modeling code, capable of dealing with nonplanar faults and surface topography. The method uses a "mimetic" approach to model spontaneous rupture on a fault within a 3D isotropic anelastic solid, wherein the equations of motion are approximated with a second-order Support-Operator method on a logically rectangular mesh. Grid cells are not required to be parallelepipeds, however, so that non-rectangular meshes can be supported to model complex regions. For areas of the mesh that are in fact rectangular, the code uses a streamlined version of the algorithm that takes advantage of the simplifications of the operators in such areas. The fault itself is modeled using a double-node technique, and the rheology on the fault surface is modeled through a slip-weakening, frictional, internal boundary condition. The Support Operator Rupture Dynamics (SORD) code was prototyped in MATLAB, and all algorithms have been validated against known solutions (including analytical ones, e.g., Kostrov, 1964) or previously validated solutions. This validation effort is conducted in the context of the SCEC Dynamic Rupture model validation effort led by R. Archuleta and R. Harris. Absorbing boundaries at the model edges are handled using the perfectly matched layers (PML) method (Olsen & Marcinkovich, 2003). PML is shown to work extremely well on rectangular meshes. We show that our implementation is also effective on non-rectangular meshes under the restriction that the boundary be planar. For validation of the model we use a variety of test cases on two types of meshes: a rectangular mesh and a skewed mesh. The skewed mesh amplifies any biases caused by the Support-Operator method on non-rectangular elements. Wave propagation and absorbing boundaries are tested with a spherical wave source. Rupture dynamics on a planar fault are tested against (1) a Kostrov analytical solution, (2) data from foam rubber scale models

  4. A computer analysis program for interfacing thermal and structural codes

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.; Maffeo, R. J.

    1985-01-01

    A software package has been developed to transfer three-dimensional transient thermal information accurately, efficiently, and automatically from a heat transfer analysis code to a structural analysis code. The code is called three-dimensional TRansfer ANalysis Code to Interface Thermal and Structural codes, or 3D TRANCITS. TRANCITS has the capability to couple finite difference and finite element heat transfer analysis codes to linear and nonlinear finite element structural analysis codes. TRANCITS currently supports the output of SINDA and MARC heat transfer codes directly. It will also format the thermal data output directly so that it is compatible with the input requirements of the NASTRAN and MARC structural analysis codes. Other thermal and structural codes can be interfaced using the transfer module with the neutral heat transfer input file and the neutral temperature output file. The transfer module can handle different elemental mesh densities for the heat transfer analysis and the structural analysis.
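
    The mesh-density mismatch the last sentence mentions can be sketched in its simplest one-dimensional form: linear interpolation of nodal temperatures from a dense thermal mesh onto a coarser structural mesh (TRANCITS itself handles three-dimensional transient data and finite element topologies).

```python
import numpy as np

def transfer_temperatures(thermal_nodes, thermal_T, structural_nodes):
    """Map nodal temperatures from a (dense) 1D thermal mesh onto a
    (coarser) structural mesh by linear interpolation, the simplest
    case of handling unequal mesh densities."""
    return np.interp(structural_nodes, thermal_nodes, thermal_T)

# Thermal mesh: 11 nodes; structural mesh: 5 nodes on the same bar
x_th = np.linspace(0.0, 1.0, 11)
T_th = 100.0 * (1.0 - x_th)          # linear cooldown along the bar
x_st = np.linspace(0.0, 1.0, 5)
T_st = transfer_temperatures(x_th, T_th, x_st)
```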

  5. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  6. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  7. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented, and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
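The role of the degree-m1 interleaver is to spread a channel burst across m1 different outer codewords, so that each codeword sees only a few symbol errors. A toy illustration with hypothetical parameters (m1 = 4 codewords of 8 symbols each, not the paper's code sizes):

```python
# Hypothetical parameters for illustration: interleave m1 = 4 codewords of
# n = 8 symbols each, then hit the transmitted stream with a burst.
m1, n = 4, 8
codewords = [[(r, c) for c in range(n)] for r in range(m1)]  # symbol labels

# Interleave: transmit column-by-column across the m1 codewords.
stream = [codewords[r][c] for c in range(n) for r in range(m1)]

# A channel burst wiping out 6 consecutive transmitted symbols...
burst = set(stream[10:16])

# ...touches at most ceil(6 / m1) = 2 symbols of any single codeword.
per_codeword = [sum(1 for s in cw if s in burst) for cw in codewords]
print(per_codeword)  # → [1, 1, 2, 2]
```

Each outer codeword then needs to correct only a handful of symbol errors or erasures, which is exactly what the GF(2^l) outer code is designed for.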

  8. Interface requirements for coupling a containment code to reactor system thermal hydraulic codes

    SciTech Connect

    Baratta, A.J.

    1997-07-01

To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Such transients and accidents as a loss-of-coolant accident in both pressurized water and boiling water reactors, and inadvertent operation of safety relief valves, all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system have usually been iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response, and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially, and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.

  9. CORA - A Semiautomatic Coding System Application to the Coding of Markush Formulas

    ERIC Educational Resources Information Center

    Deforeit, Huguette; And Others

    1972-01-01

    A computer system, named CORA, has been devised for coding chemical structures by fragmentation elements. It has been used to encode Markush formulas in patents according to the Ring codes used in the Ringdoc and Pestdoc services and results in an easy, speedy, reliable and inexpensive method. (4 references) (Author)

  10. LOWTHRM: a thermal fluence code. Master's thesis

    SciTech Connect

    Westbrook, C.R.

    1980-03-01

A Fortran computer program, LOWTHRM, is described for calculating nuclear thermal fluence incident upon a target area. Atmospheric transmissivity factors in the spectral region 0.25 to 28.5 microns are determined through use of the LOWTRAN5 computer code. The program provides a choice of six model atmospheres covering seasonal and latitudinal variations from sea level to 100 km, eight haze models, and accounts for molecular absorption, molecular scattering, and aerosol extinction. Atmospheric refraction, earth curvature effects, thermal scattering, and thermal ground reflection contributions are included.

  11. Visual mismatch negativity: a predictive coding view

    PubMed Central

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control “refractoriness” effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  12. Visual mismatch negativity: a predictive coding view.

    PubMed

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control "refractoriness" effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  13. A Code of Ethics for Referees?

    NASA Astrophysics Data System (ADS)

    Sturrock, Peter A.

    2004-04-01

    I have read with interest the many letters commenting on the pros and cons of anonymity for referees. While I sympathize with writers who have suffered from referees who are incompetent or uncivil, I also sympathize with those who argue that one would simply exchange one set of problems for another if journals were to require that all referees waive anonymity. Perhaps there is a more direct way to address the issue. It may help if guidelines for referees were to include a code of ethics.

  14. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

In this paper, we use a previously proposed construction technique to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and the fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in the literature.

  15. CHEETAH: A fast thermochemical code for detonation

    SciTech Connect

    Fried, L.E.

    1993-11-01

For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, with which CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  16. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel Aaron Lazerson

    2012-07-27

A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) in which the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the LHD.

  17. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel A. Lazerson, S. Sakakibara and Y. Suzuki

    2013-03-12

A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) where the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the LHD.

  18. A Construction of Lossy Source Code Using LDPC Matrices

    NASA Astrophysics Data System (ADS)

    Miyake, Shigeki; Muramatsu, Jun

Research into applying LDPC code theory, which is used for channel coding, to source coding has received a lot of attention in several research fields, such as distributed source coding. In this paper, a source coding problem with a fidelity criterion is considered. Matsunaga et al. and Martinian et al. constructed a lossy code under the conditions of a binary alphabet, a uniform distribution, and a Hamming measure of fidelity criterion. We extend their results and construct a lossy code under the extended conditions of a binary alphabet, a distribution that is not necessarily uniform, and a fidelity measure that is bounded and additive, and show that the code can achieve the optimal rate, i.e., the rate-distortion function. By applying a formula for the random walk on a lattice to the analysis of LDPC matrices on Zq, where q is a prime number, we show that results similar to those for the binary alphabet condition hold for Zq, the multiple alphabet condition.
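For the binary/Hamming setting treated by Matsunaga et al. and Martinian et al., and for its non-uniform extension, the target rate has a closed form: R(D) = H(p) - H(D) for D < min(p, 1 - p), else 0. A quick numeric check (illustrative only; this is the benchmark the LDPC construction aims at, not the construction itself):

```python
from math import log2

def h2(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def rate_distortion_binary(p, d):
    """Rate-distortion function of a Bernoulli(p) source under Hamming
    distortion: R(D) = H(p) - H(D) for 0 <= D < min(p, 1-p), else 0."""
    return max(0.0, h2(p) - h2(d)) if d < min(p, 1 - p) else 0.0

# Non-uniform source (p = 0.2) at distortion D = 0.05:
print(rate_distortion_binary(0.2, 0.05))   # ≈ 0.4355 bits/symbol
print(rate_distortion_binary(0.5, 0.0))    # uniform, lossless: 1.0 bit
```

Any code achieving this rate at this distortion is optimal, which is the sense in which the constructed LDPC-based code is said to achieve the rate-distortion function.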

  19. Containment Fire Simulation by a CFD Code

    SciTech Connect

    Heitsch, Matthias

    2002-07-01

In the frame of an international collaborative project to evaluate fire models, a code benchmark was initiated to better quantify the strengths and weaknesses of the codes involved. CFX has been applied to simulate selected cases of both parts of the benchmark. These simulations are presented and discussed in this paper. In the first part of the benchmark, a pool fire represented simply by a heat release table is considered. Consequently, the physical fire model within CFX is simple. Radiative heat exchange together with turbulent mixing are involved. Two cases, with and without venting of the fire room, are compared. The second part of the benchmark requires a more detailed fire model in order to inspect the availability of oxygen locally and to control the fire intensity. Under unvented conditions, oxygen starvation is encountered and the fire oscillates. Mechanical ventilation changes this behavior and provides enough oxygen throughout the simulation time. The predefined damage criteria, which characterize whether a target cable in the fire room would be damaged, are not met. However, the predicted surface temperatures are well above the assumed threshold temperatures. A continuation of the work presented is foreseen and will address more complex physical modeling of relevant fire scenarios. (author)

  20. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self, as well as external, electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  1. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  2. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  3. Python interface generator for Fortran based codes (a code development aid)

    SciTech Connect

    Grote, D. P.

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  4. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2010-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
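The encoder chain described above can be sketched end to end. Every component below (a 3-bit parity outer code, a random permutation interleaver, an accumulator as the recursive inner coder, Gray-labeled QPSK as the mapper) is a stand-in chosen for brevity, not the patented design:

```python
import random

# Toy sketch of the serial concatenation:
# outer coder -> interleaver -> recursive inner coder -> mapper.
random.seed(0)

def outer_code(bits):
    """Toy outer coder: append one parity bit per 3-bit block."""
    out = []
    for i in range(0, len(bits), 3):
        blk = bits[i:i + 3]
        out += blk + [sum(blk) % 2]
    return out

def inner_recursive(bits):
    """Accumulator: a minimal recursive inner coder (running XOR)."""
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def map_qpsk(bits):
    """Map bit pairs to Gray-labeled QPSK constellation points."""
    pts = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return [pts[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

data = [random.randint(0, 1) for _ in range(6)]
coded = outer_code(data)                              # 6 -> 8 bits
perm = random.sample(range(len(coded)), len(coded))   # interleaver
interleaved = [coded[p] for p in perm]
symbols = map_qpsk(inner_recursive(interleaved))
print(len(symbols))  # 8 coded bits -> 4 QPSK symbols
```

The recursive (state-carrying) inner coder is what makes iterative decoding effective at the receiver, where soft information is passed between the inner and outer SISO decoders through the (de)interleaver.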

  5. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2011-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  6. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third-party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  7. EUNHA: a New Cosmological Hydrodynamic Simulation Code

    NASA Astrophysics Data System (ADS)

    Shin, Jihye; Kim, Juhan; Kim, Sungsoo S.; Park, Changbom

    2014-06-01

We develop a parallel cosmological hydrodynamic simulation code designed for the study of the formation and evolution of cosmological structures. The gravitational force is calculated using the TreePM method, and the hydrodynamics is implemented based on smoothed particle hydrodynamics. The initial displacement and velocity of simulation particles are calculated according to second-order Lagrangian perturbation theory using the power spectra of dark matter and baryonic matter. The initial background temperature is given by Recfast, and the temperature fluctuations at the initial particle positions are assigned according to the adiabatic model. We use a time-limiter scheme over the individual time steps to capture shock fronts and to ease the time-step tension between the shock and preshock particles. We also include the astrophysical gas processes of radiative heating/cooling, star formation, metal enrichment, and supernova feedback. We test the code in several standard cases, such as one-dimensional Riemann problems, the Kelvin-Helmholtz instability, and the Sedov blast wave. Star formation on the galactic disk is investigated to check whether the Schmidt-Kennicutt relation is properly recovered. We also study the global star formation history at different simulation resolutions and compare it with observations.
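The smoothed particle hydrodynamics ingredient rests on a smoothing kernel that must integrate to unity over space. A sketch of the cubic-spline (M4) kernel, the standard choice in many SPH codes, with a numeric normalization check (the abstract does not state which kernel EUNHA uses; this is illustrative of the SPH component it mentions):

```python
import math

# Cubic-spline (M4) SPH smoothing kernel in 3D with smoothing length h.
def w_cubic_spline(r, h):
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)   # 3D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0                          # compact support: W = 0 for r >= 2h

# Verify the kernel integrates to 1 over 3D space (midpoint rule in r).
h, n = 1.0, 20000
dr = 2.0 * h / n
total = sum(
    4.0 * math.pi * ((i + 0.5) * dr) ** 2
    * w_cubic_spline((i + 0.5) * dr, h) * dr
    for i in range(n)
)
print(round(total, 4))  # → 1.0
```

The compact support (W vanishes beyond 2h) is what keeps each particle's hydrodynamic interactions local and the method parallelizable.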

  8. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    PubMed

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9, 61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates, the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings with DC step inputs, the instantaneous rate coder matches both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals. PMID:26922680
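The abstract's central point, that an instantaneous rate code falls out of threshold-based spike timing, can be illustrated with the simplest such encoder: an integrate-and-fire unit whose spike rate tracks the input amplitude. All parameters below are invented, and the fixed threshold is a simplification; the authors' coder uses a dynamic threshold tied to an explicit error/energy trade-off:

```python
import math

# Integrate-and-fire encoder: spike when the integrated input crosses a
# threshold theta, then reset by subtraction. Illustrative parameters only.
dt, theta = 0.001, 0.05
t = [i * dt for i in range(1000)]                            # 1 s of input
signal = [1.0 + 0.5 * math.sin(2 * math.pi * 5 * x) for x in t]

acc, spikes = 0.0, []
for i, s in enumerate(signal):
    acc += s * dt                # integrate the (positive) input
    if acc >= theta:             # threshold crossing -> emit a spike
        spikes.append(i * dt)
        acc -= theta             # reset by subtraction

# Mean rate ≈ mean(signal) / theta = 1.0 / 0.05 ≈ 20 spikes/s.
print(len(spikes))
```

Because each spike accounts for a fixed area theta under the signal, spikes bunch where the signal is large, which is the rate-code reading of a threshold-based timing scheme.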

  9. Thinking through the Issues in a Code of Ethics

    ERIC Educational Resources Information Center

    Davis, Michael

    2008-01-01

    In June 2005, seven people met at the Illinois Institute of Technology (IIT) to develop a code of ethics governing all members of the university community. The initial group developed a preamble, that included reasons for establishing such a code and who was to be governed by the code, including rationale for following the guidelines. From this…

  10. A burst-correcting algorithm for Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.

    1990-01-01

The Bose, Chaudhuri, and Hocquenghem (BCH) codes form a large class of powerful error-correcting cyclic codes. Among the non-binary BCH codes, the most important subclass is the Reed Solomon (RS) codes. Reed Solomon codes have the ability to correct random and burst errors. It is well known that an (n,k) RS code can correct up to (n-k)/2 random errors. When burst errors are involved, the error-correcting ability of the RS code can be increased beyond (n-k)/2. It has previously been shown that RS codes can reliably correct burst errors of length greater than (n-k)/2. In this paper, a new decoding algorithm is given which can also correct a burst error of length greater than (n-k)/2.
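The classical limits quoted above are easy to tabulate. A small sketch using the familiar (255, 223) RS code as an example; the example code and the errors-and-erasures bound 2e + f <= n - k are standard textbook results, not taken from this paper, whose algorithm goes beyond them for bursts:

```python
# Classical correction limits for an (n, k) Reed-Solomon code.
def rs_capability(n, k):
    """Maximum number of random symbol errors a bounded-distance
    decoder corrects: t = (n - k) // 2."""
    return (n - k) // 2

def correctable(n, k, errors, erasures):
    """True when e errors and f erasures satisfy 2e + f <= n - k."""
    return 2 * errors + erasures <= n - k

# The widely used (255, 223) RS code:
print(rs_capability(255, 223))        # → 16
print(correctable(255, 223, 10, 12))  # → True  (2*10 + 12 = 32 <= 32)
print(correctable(255, 223, 17, 0))   # → False (beyond t = 16)
```

Burst-correcting algorithms exploit the fact that the error locations are consecutive, extra structure that the random-error bound does not assume.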

  11. A new description of combined trellis coding with asymmetric modulation

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1985-01-01

The combination of rate k/(k+1) trellis codes with digital modulations described by an asymmetric 2^(k+1)-point signal constellation has recently been shown to yield a performance improvement over the traditional symmetric constellation combined with the same trellis code. The approach taken is to specify an underlying trellis code and then map the output code symbols into the fixed signal constellation based on a rule called mapping by set partitioning. The latter process is tantamount to assigning signals from the constellation to the trellis code transitions so as to maximize the free Euclidean distance of the code. Recently, a new description of trellis codes has been given that combines the above two steps into one. The ideas introduced are further explored, placing particular emphasis on the optimization of the signal constellation asymmetry. It can be concluded that the trellis-coded amplitude modulation (AM) designs given are very close to being optimum.

  12. A novel RS BTC coding scheme for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Jia, Yue-xing; Hu, Yun-xia

    2012-07-01

A novel Reed Solomon (RS) block turbo code (BTC) coding scheme of RS(63,58)×RS(63,58) for optical communications is proposed. The simulation results show that the net coding gain (NCG) of this scheme at the sixth iteration is greater than that of other coding schemes at the third iteration for a bit error rate (BER) of 10^-12. Furthermore, the novel RS BTC has a shorter component code and faster encoding and decoding. Therefore, the novel RS BTC coding scheme can be better used in high-speed long-haul optical communication systems, and the novel RS BTC can be regarded as a candidate for the super forward error correction (super-FEC) code. Moreover, the encoding/decoding design and implementation of the novel RS BTC are also presented.

  13. The Problem of Evolving a Genetic Code

    ERIC Educational Resources Information Center

    Woese, Carl R.

    1970-01-01

    Proposes models for the evolution of the genetic code and translation mechanisms. Suggests that the translation process is so complex and precise that it must have evolved in many stages, and that the evolution of the code was influenced by the constraints imposed by the evolving translation mechanism. (EB)

  14. Standardized pill imprint codes: a pharma fantasy.

    PubMed

    Schiff, Gordon

    2004-02-01

To safely use medications, professionals and consumers need usable and reliable methods to identify the tablets patients are prescribed and taking. Currently, each manufacturer assigns its own identifying codes and symbols. Standardization of the system for identifying solid dosage forms is a goal that has been widely advocated, yet stubbornly resistant to progress. Physicians, pharmacists, and consumers attempting to identify pills must use various methods which have shortcomings in ease of use, availability, and accuracy. Arguments have been advanced, particularly by pharmaceutical manufacturers, that evidence of unworkability of the current system is not compelling, and costs of retooling current manufacturing processes could be prohibitive. These issues are currently being explored by a task force led by the U.S. Pharmacopeia Safe Medication Use and Pharmaceutical Dosage Forms Expert Committees. This paper presents a fictitious case study of an elderly patient succumbing to digoxin overdose, illustrating the dilemmas posed in the tablet-imprint debate. PMID:15171065

  15. Performance results for a hybrid coding system.

    NASA Technical Reports Server (NTRS)

    Hoffman, L. B.

    1971-01-01

    Results are presented from computer simulation studies of the hybrid pull-up bootstrap decoding algorithm, using a constraint-length-24, nonsystematic, rate-1/2 convolutional code on the symmetric channel with both binary and eight-level quantized outputs. Computational performance was used to measure the effect of several decoder parameters and determine practical operating constraints. Results reveal that the track length may be reduced to 500 information bits with small degradation in performance. The optimum number of tracks per block was found to be in the range from 7 to 11. An effective technique was devised to efficiently allocate computational effort and identify reliably decoded data sections. Long simulations indicate that a practical bootstrap decoding configuration has a computational performance about 1.0 dB better than sequential decoding and an output bit error rate of about 2.5 x 10^-6 near the R_comp point.
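
    The class of encoder studied here can be illustrated at toy scale. The sketch below is a rate-1/2, nonsystematic convolutional encoder with constraint length 3 and the classic (7, 5) octal generators; the simulations in the record used constraint length 24, so this is a structural illustration only.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 nonsystematic convolutional encoder: shift each input
    bit into a k-bit register and emit one parity bit per generator
    polynomial (here the classic (7,5) octal pair, constraint length 3)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # shift in new bit
        out.append(bin(state & g1).count("1") % 2)    # parity from g1 taps
        out.append(bin(state & g2).count("1") % 2)    # parity from g2 taps
    return out
```

For input 1, 0, 1 this encoder produces the well-known output sequence 11 10 00, i.e. two coded bits per information bit.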

  16. A new ART code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least-squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
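
    The basic ART update, on which the least-squares iterative refinement described above builds, is the Kaczmarz row-action scheme: cycle through the ray equations A[i]·x = b[i] and project the current estimate onto each hyperplane in turn. A minimal dense-matrix sketch (illustrative, not the paper's code):

```python
def art_reconstruct(A, b, iters=200, relax=1.0):
    """Algebraic Reconstruction Technique via Kaczmarz row-action
    updates: for each ray equation A[i] . x = b[i], project the current
    estimate x onto that hyperplane, optionally under-relaxed."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            if norm2 == 0.0:
                continue                      # skip empty rays
            c = relax * (bi - dot) / norm2
            x = [xi + c * r for xi, r in zip(x, row)]
    return x
```

When the rows are mutually orthogonal the sweep converges in a single pass; in general it converges to a minimum-residual solution, matching the unconditional convergence the abstract reports.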

  17. Unravelling a histone code for malaria virulence.

    PubMed

    Comeaux, Christy A; Duraisingh, Manoj T

    2007-12-01

    Epigenetic phenomena have been shown to play a role in the regulated expression of virulence genes in several pathogenic organisms, including the var gene family in Plasmodium falciparum. A better understanding of how P. falciparum can both maintain a single active var gene locus through many erythrocytic cycles and also achieve successive switching to different loci in order to evade the host immune system is greatly needed. Disruption of this tightly co-ordinated expression system presents an opportunity for increased clearance of the parasites by the immune system and, in turn, reduced mortality and morbidity. In the current issue of Molecular Microbiology, Lopez-Rubio and colleagues investigate the correlation of specific post-translational histone modifications with different transcriptional states of a single var gene, var2csa. Quantitative chromatin immunoprecipitation is used to demonstrate that different histone methylation marks are enriched at the 5' flanking and coding regions of active, poised or silenced var genes. They identify an increase of H3K4me2 and H3K4me3 in the 5' flanking region of an active var locus and expand on an earlier finding that H3K9me3 is enriched in the coding regions of silenced var genes. The authors also present evidence that H3K4me2 bookmarks the active var gene locus during later developmental stages for expression in the subsequent asexual cycle, hinting at a potential mechanism for transcriptional 'memory'. The stage is now set for work generating a complete catalogue of all histone modifications associated with var gene regulation as well as functional studies striving to uncover the precise mechanisms underlying these observations. PMID:18028316

  18. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been implemented and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and the axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  19. Deciphering a neural code for vision.

    PubMed

    Passaglia, C; Dodge, F; Herzog, E; Jackson, S; Barlow, R

    1997-11-11

    Deciphering the information that eyes, ears, and other sensory organs transmit to the brain is important for understanding the neural basis of behavior. Recordings from single sensory nerve cells have yielded useful insights, but single neurons generally do not mediate behavior; networks of neurons do. Monitoring the activity of all cells in a neural network of a behaving animal, however, is not yet possible. Taking an alternative approach, we used a realistic cell-based model to compute the ensemble of neural activity generated by one sensory organ, the lateral eye of the horseshoe crab, Limulus polyphemus. We studied how the neural network of this eye encodes natural scenes by presenting to the model movies recorded with a video camera mounted above the eye of an animal that was exploring its underwater habitat. Model predictions were confirmed by simultaneously recording responses from single optic nerve fibers of the same animal. We report here that the eye transmits to the brain robust "neural images" of objects having the size, contrast, and motion of potential mates. The neural code for such objects is not found in ambiguous messages of individual optic nerve fibers but rather in patterns of coherent activity that extend over small ensembles of nerve fibers and are bound together by stimulus motion. Integrative properties of neurons in the first synaptic layer of the brain appear well suited to detecting the patterns of coherent activity. Neural coding by this relatively simple eye helps explain how horseshoe crabs find mates and may lead to a better understanding of how more complex sensory organs process information. PMID:9356504

  20. HIDUTYDRV Code, A Fuel Product Margin Tool

    SciTech Connect

    Krammen, Michael A.; Karoutas, Zeses E.; Grill, Steven F.; Sutharshan, Balendra

    2007-07-01

    HIDUTYDRV is a computer code currently used in core design to model the best estimate steady-state fuel rod corrosion performance for Westinghouse's CE-design 14x14 and 16x16 fuel. The fuel rod oxide thickness, sub-cooled nucleate boiling (referred to as mass evaporation or steaming), and fuel duty indices can be predicted for individual rods or up to every fuel rod in the quarter core at every nuclear fuel management depletion time-step as a function of axial elevation within the core. Best estimate operating margins for fuel components whose performance depends on the local power and thermal hydraulic conditions are candidates for analysis with HIDUTYDRV. HIDUTYDRV development will focus on fuel component parameters associated with known leakers for addressing INPO goals to eliminate fuel leakers by 2010. (authors)

  1. Toward a Code of Conduct for Graduate Education

    ERIC Educational Resources Information Center

    Proper, Eve

    2012-01-01

    Most academic disciplines promulgate codes of ethics that serve as public statements of professional norms of their membership. These codes serve both symbolic and practical purposes, stating to both members and the larger public what a discipline's highest ethics are. This article explores what scholarly society codes of ethics could say about…

  2. A Code for Probabilistic Safety Assessment

    SciTech Connect

    1997-10-10

    An integrated fault-event tree software package PSAPACK was developed for level-1 PSA using personal computers. It is a menu driven interactive modular system which permits different choices, depending on the user's purposes and needs. The event tree development module is capable of developing the logic accident sequences based on the user's specified relations between event tree headings. Identification of success sequences and core damage sequences is done automatically by the code based on the success function input by the user. It links minimum cut sets (MCS) from system fault trees and performs the Boolean reduction. It can also retrieve data from the reliability data base to perform the quantification of accident sequences.

  3. A Code for Probabilistic Safety Assessment

    Energy Science and Technology Software Center (ESTSC)

    1997-10-10

    An integrated fault-event tree software package PSAPACK was developed for level-1 PSA using personal computers. It is a menu driven interactive modular system which permits different choices, depending on the user's purposes and needs. The event tree development module is capable of developing the logic accident sequences based on the user's specified relations between event tree headings. Identification of success sequences and core damage sequences is done automatically by the code based on the success function input by the user. It links minimum cut sets (MCS) from system fault trees and performs the Boolean reduction. It can also retrieve data from the reliability data base to perform the quantification of accident sequences.
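
    The Boolean reduction of minimum cut sets mentioned above rests on the absorption law A + A·B = A: any cut set that contains another cut set is redundant. A minimal sketch of that reduction step (illustrative, not PSAPACK's actual routine):

```python
def minimize_cut_sets(cut_sets):
    """Boolean reduction by absorption (A + A.B = A): drop any cut set
    that is a superset of another, leaving only minimal cut sets."""
    sets = [frozenset(cs) for cs in cut_sets]
    minimal = []
    for s in sorted(set(sets), key=len):      # shortest first
        if not any(m <= s for m in minimal):  # absorbed by a smaller set?
            minimal.append(s)
    return [sorted(m) for m in minimal]
```

For example, given cut sets {P1, V2}, {P1}, {V2, V3}, and {P1, V2, V3}, only {P1} and {V2, V3} survive, since the other two are absorbed by {P1}.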

  4. A Proposed Code Of Ethics For Infrared Thermographic Professionals

    NASA Astrophysics Data System (ADS)

    Roberts, Charles C.

    1987-05-01

    The American Heritage Dictionary defines ethics as "The general study of morals and of specific moral choices to be made by the individual in his relationship with others". A code of ethics defines these moral relationships to encourage integrity throughout a profession. A defined code of ethics often yields credibility to an organization or association of professionals. This paper outlines a proposed code of ethics for practitioners in the infrared thermographic field. The proposed code covers relationships with the public, clients, other professionals and employers. The proposed code covers credentials, capabilities, thermograms, compensation and safety.

  5. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  6. Benchmark study between FIDAP and a cellular automata code

    SciTech Connect

    Akau, R.L.; Stockman, H.W.

    1991-01-01

    A fluid flow benchmark exercise was conducted to compare results between a cellular automata code and FIDAP. Cellular automata codes are free from gridding constraints, and are generally used to model slow (Reynolds number ≈ 1) flows around complex solid obstacles. However, the accuracy of cellular automata codes at higher Reynolds numbers, where inertial terms are significant, is not well-documented. In order to validate the cellular automata code, two fluids problems were investigated. For both problems, flow was assumed to be laminar, two-dimensional, isothermal, incompressible and periodic. Results showed that the cellular automata code simulated the overall behavior of the flow field. 7 refs., 12 figs.

  7. A grouped binary time code for telemetry and space applications

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    A computer-oriented time code designed for users with various time resolution requirements is presented. It is intended as a time code for spacecraft and ground applications where direct code compatibility with automatic data processing equipment is of primary consideration. The principal features of this time code are a byte-oriented format, selectable resolution options (from seconds to nanoseconds), and a long ambiguity period. The time code is compatible with the new data handling and management concepts such as the NASA End-to-End Data System and the Telemetry Data Packetization format.
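
    The byte-oriented, selectable-resolution idea can be sketched as follows. The field widths here (a 48-bit seconds count for a long ambiguity period of roughly nine million years, plus an optional 32-bit nanosecond field for sub-second resolution) are illustrative assumptions, not the published format.

```python
import struct

def pack_time_code(seconds, nanoseconds=None):
    """Byte-oriented time code sketch: a 6-byte big-endian seconds
    count gives a long ambiguity period (~2**48 s), and an optional
    4-byte nanosecond field supplies selectable sub-second resolution.
    Field widths are illustrative, not the published format."""
    word = struct.pack(">Q", seconds)[2:]       # 48-bit seconds field
    if nanoseconds is not None:
        word += struct.pack(">I", nanoseconds)  # 32-bit fractional field
    return word
```

A user needing only second-level resolution transmits 6 bytes; one needing nanoseconds appends the fractional field, keeping the format byte-aligned for direct handling by data processing equipment.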

  8. A concatenated coded modulation scheme for error control (addition 2)

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is described. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high-speed satellite communications for large file transfer where high reliability is required. This paper also presents a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft-decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 deg rotation.
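
    The figure of merit named above for the inner codes is minimum squared Euclidean distance over the M-ary PSK constellation. For unit-energy M-PSK, the uncoded baseline between adjacent constellation points is d² = 4 sin²(π/M), which a coded inner scheme must improve upon. A one-line helper:

```python
import math

def min_squared_euclidean_distance(m):
    """Minimum squared Euclidean distance between adjacent points of a
    unit-energy M-ary PSK constellation: d^2 = 4 sin^2(pi / M)."""
    return 4 * math.sin(math.pi / m) ** 2
```

For BPSK this gives 4, for QPSK 2, and for 8-PSK about 0.586, illustrating why bandwidth-efficient 8-PSK inner codes need added coding distance to reach high reliability.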

  9. A concatenated coded modulation scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is presented. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is very effective for high speed satellite communications for large file transfer where high reliability is required. A simple method is also presented for constructing codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 deg rotation.

  10. Circular code motifs in transfer and 16S ribosomal RNAs: a possible translation code in genes.

    PubMed

    Michel, Christian J

    2012-04-01

    In 1996, a common trinucleotide circular code, called X, was identified in genes of eukaryotes and prokaryotes (Arquès and Michel, 1996). This circular code X is a set of 20 trinucleotides allowing the reading frames in genes to be retrieved locally, i.e. anywhere in genes and in particular without start codons. This reading frame retrieval needs a window length l of 12 nucleotides (l ≥ 12). With a window length strictly less than 12 nucleotides (l < 12), some words of X, called ambiguous words, are found in the shifted frames (the reading frame shifted by one or two nucleotides), preventing the reading frame in genes from being retrieved. Since 1996, these ambiguous words of X had never been studied. In the first part of this paper, we identify all the ambiguous words of the common trinucleotide circular code X. With a length l varying from 1 to 11 nucleotides, the type and the occurrence number (multiplicity) of ambiguous words of X are given in each shifted frame. Maximal ambiguous words of X, words that are not factors of other ambiguous words, are also determined. Two probability definitions based on these results show that the common trinucleotide circular code X retrieves the reading frame in genes with a probability of about 90% with a window length of 6 nucleotides, and a probability of 99.9% with a window length of 9 nucleotides (100% with a window length of 12 nucleotides, by definition of a circular code). In the second part of this paper, we identify X circular code motifs (shortly X motifs) in transfer RNA and 16S ribosomal RNA: a tRNA X motif of 26 nucleotides including the anticodon stem-loop and seven 16S rRNA X motifs of length greater than or equal to 15 nucleotides. Window lengths of reading frame retrieval with each trinucleotide of these X motifs are also determined. Thanks to the crystal structure 3I8G (Jenner et al., 2010), a 3D visualization of X motifs in the ribosome shows several spatial configurations involving mRNA X motifs, A-tRNA and E-tRNA X
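
    The notion of ambiguous words in shifted frames can be illustrated with the related, stronger property of comma-freeness: a trinucleotide code is comma-free if no codeword appears at a frame-shifted (+1 or +2) position inside any concatenation of two codewords (circular codes relax this, requiring only that frame retrieval succeed within a bounded window). A toy check, not tied to the actual 20-word code X:

```python
def is_comma_free(code):
    """Comma-freeness check for a set of trinucleotides: no codeword may
    occur at a shifted (frame +1 or +2) position inside any concatenation
    of two codewords.  Such shifted-frame occurrences are the analogue of
    the 'ambiguous words' enumerated in the paper."""
    for a in code:
        for b in code:
            s = a + b
            if s[1:4] in code or s[2:5] in code:  # frame +1 and +2 windows
                return False
    return True
```

For instance, {"AAC", "ATC"} is comma-free, while {"AAA"} is not, since AAA reappears in every shifted frame of AAAAAA.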

  11. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  12. An Improved Canine Genome and a Comprehensive Catalogue of Coding Genes and Non-Coding Transcripts

    PubMed Central

    Hoeppner, Marc P.; Lundquist, Andrew; Pirun, Mono; Meadows, Jennifer R. S.; Zamani, Neda; Johnson, Jeremy; Sundström, Görel; Cook, April; FitzGerald, Michael G.; Swofford, Ross; Mauceli, Evan; Moghadam, Behrooz Torabi; Greka, Anna; Alföldi, Jessica; Abouelleil, Amr; Aftuck, Lynne; Bessette, Daniel; Berlin, Aaron; Brown, Adam; Gearin, Gary; Lui, Annie; Macdonald, J. Pendexter; Priest, Margaret; Shea, Terrance; Turner-Maier, Jason; Zimmer, Andrew; Lander, Eric S.; di Palma, Federica

    2014-01-01

    The domestic dog, Canis familiaris, is a well-established model system for mapping trait and disease loci. While the original draft sequence was of good quality, gaps were abundant particularly in promoter regions of the genome, negatively impacting the annotation and study of candidate genes. Here, we present an improved genome build, canFam3.1, which includes 85 MB of novel sequence and now covers 99.8% of the euchromatic portion of the genome. We also present multiple RNA-Sequencing data sets from 10 different canine tissues to catalog ∼175,000 expressed loci. While about 90% of the coding genes previously annotated by EnsEMBL have measurable expression in at least one sample, the number of transcript isoforms detected by our data expands the EnsEMBL annotations by a factor of four. Syntenic comparison with the human genome revealed an additional ∼3,000 loci that are characterized as protein coding in human and were also expressed in the dog, suggesting that those were previously not annotated in the EnsEMBL canine gene set. In addition to ∼20,700 high-confidence protein coding loci, we found ∼4,600 antisense transcripts overlapping exons of protein coding genes, ∼7,200 intergenic multi-exon transcripts without coding potential, likely candidates for long intergenic non-coding RNAs (lincRNAs) and ∼11,000 transcripts were reported by two different library construction methods but did not fit any of the above categories. Of the lincRNAs, about 6,000 have no annotated orthologs in human or mouse. Functional analysis of two novel transcripts with shRNA in a mouse kidney cell line altered cell morphology and motility. All in all, we provide a much-improved annotation of the canine genome and suggest regulatory functions for several of the novel non-coding transcripts. PMID:24625832

  13. A code generation framework for the ALMA common software

    NASA Astrophysics Data System (ADS)

    Troncoso, Nicolás; von Brand, Horst H.; Ibsen, Jorge; Mora, Matias; Gonzalez, Victor; Chiozzi, Gianluca; Jeram, Bogdan; Sommer, Heiko; Zamora, Gabriel; Tejeda, Alexis

    2010-07-01

    Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model Driven code generation to transform a UML Model directly into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, a lower handcrafted LOC reduces the error rate. Additional benefits achieved by model driven code generation are: software reuse, implicit application of design patterns and automatic tests generation. A model driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets. The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator. OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process and aiding the developers by keeping a clean logical separation between the two. Our first results show that code generation dramatically improves code productivity.

  14. A code inspection process for security reviews

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2010-04-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  15. A code inspection process for security reviews

    SciTech Connect

    Garzoglio, Gabriele (Fermilab)

    2009-05-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  16. A concatenated coded modulation scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, Tadao; Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is presented. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high speed satellite communication for large file transfer where high reliability is required. Also presented is a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft decision Viterbi decoding algorithm.

  17. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  18. Python interface generator for Fortran based codes (a code development aid)

    Energy Science and Technology Software Center (ESTSC)

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  19. A-to-I editing of coding and non-coding RNAs by ADARs

    PubMed Central

    Nishikura, Kazuko

    2016-01-01

    Adenosine deaminases acting on RNA (ADARs) convert adenosine to inosine in double-stranded RNA. This A-to-I editing occurs not only in protein-coding regions of mRNAs, but also frequently in non-coding regions that contain inverted Alu repeats. Editing of coding sequences can result in the expression of functionally altered proteins that are not encoded in the genome, whereas the significance of Alu editing remains largely unknown. Certain microRNA (miRNA) precursors are also edited, leading to reduced expression or altered function of mature miRNAs. Conversely, recent studies indicate that ADAR1 forms a complex with Dicer to promote miRNA processing, revealing a new function of ADAR1 in the regulation of RNA interference. PMID:26648264

  20. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
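
    The flavor of such semantic checking can be sketched, at runtime rather than statically, by attaching physical dimensions to values and flagging formulae that combine them inconsistently. The `Quantity` class below is a hypothetical illustration of the underlying idea, not the paper's parser-based method.

```python
class Quantity:
    """Value tagged with physical dimensions, e.g. {"m": 1, "s": -1}
    for a velocity.  Addition of incompatible dimensions is flagged as
    a semantic error; multiplication combines exponents."""
    def __init__(self, value, dims):
        self.value = value
        self.dims = dict(dims)

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError("semantic error: adding incompatible units")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for d, p in other.dims.items():
            dims[d] = dims.get(d, 0) + p
        return Quantity(self.value * other.value,
                        {d: p for d, p in dims.items() if p})  # drop zero powers

velocity = Quantity(3.0, {"m": 1, "s": -1})
duration = Quantity(2.0, {"s": 1})
distance = velocity * duration          # dimensions reduce to {"m": 1}
```

Multiplying a velocity by a time correctly yields a length, while adding them raises an error, the kind of physical-formula inconsistency the static analysis is designed to document and catch.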

  1. RAYS: a geometrical optics code for EBT

    SciTech Connect

    Batchelor, D.B.; Goldfinger, R.C.

    1982-04-01

The theory, structure, and operation of the code are described. Mathematical details of equilibrium subroutines for slab, bumpy torus, and tokamak plasma geometry are presented. Wave dispersion and absorption subroutines are presented for frequencies ranging from the ion cyclotron frequency to the electron cyclotron frequency. Graphics postprocessors for RAYS output data are also described.

  2. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.

  3. Arithmetic coding as a non-linear dynamical system

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Vaidya, Prabhakar G.; Bhat, Kishor G.

    2009-04-01

    In order to perform source coding (data compression), we treat messages emitted by independent and identically distributed sources as imprecise measurements (symbolic sequence) of a chaotic, ergodic, Lebesgue measure preserving, non-linear dynamical system known as Generalized Luröth Series (GLS). GLS achieves Shannon's entropy bound and turns out to be a generalization of arithmetic coding, a popular source coding algorithm, used in international compression standards such as JPEG2000 and H.264. We further generalize GLS to piecewise non-linear maps (Skewed-nGLS). We motivate the use of Skewed-nGLS as a framework for joint source coding and encryption.
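The connection between arithmetic coding and an iterated interval map can be sketched as follows. This is a minimal binary arithmetic coder for an i.i.d. source with P(0) = p0; its decoder is exactly an iterated piecewise-linear map that renormalizes whichever subinterval contains the code value, which is the Luröth-style view described above. It is an illustration only, not the authors' Skewed-nGLS construction, and all names are ours:

```python
def arithmetic_encode(symbols, p0):
    # Narrow [low, high) per symbol; the final width equals the product
    # of symbol probabilities, so -log2(width) approaches the entropy.
    low, high = 0.0, 1.0
    for s in symbols:
        split = low + (high - low) * p0
        if s == 0:
            high = split
        else:
            low = split
    return low, high

def arithmetic_decode(x, p0, n):
    # The decoder iterates the map: the subinterval holding x gives the
    # symbol, and rescaling that subinterval to [0, 1) is the map itself.
    out = []
    for _ in range(n):
        if x < p0:
            out.append(0)
            x = x / p0
        else:
            out.append(1)
            x = (x - p0) / (1 - p0)
    return out
```

Encoding [0, 1, 1, 0] with p0 = 0.25 and decoding the midpoint of the final interval recovers the original sequence.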

  4. PLASIM: A computer code for simulating charge exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.

    1982-01-01

    The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  5. A low complexity prioritized bit-plane coding for SNR scalability in MPEG-21 scalable video coding

    NASA Astrophysics Data System (ADS)

    Peng, Wen-Hsiao; Chiang, Tihao; Hang, Hsueh-Ming

    2005-07-01

    In this paper, we propose a low complexity prioritized bit-plane coding scheme to improve the rate-distortion performance of cyclical block coding in MPEG-21 scalable video coding. Specifically, we use a block priority assignment algorithm to firstly transmit the symbols and the blocks with potentially better rate-distortion performance. Different blocks are allowed to be coded unequally in a coding cycle. To avoid transmitting priority overhead, the encoder and the decoder refer to the same context to assign priority. Furthermore, to reduce the complexity, the priority assignment is done by a look-up-table and the coding of each block is controlled by a simple threshold comparison mechanism. Experimental results show that our prioritized bit-plane coding scheme can offer up to 0.5dB PSNR improvement over the cyclical block coding described in the joint scalable verification model (JSVM).
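The block-priority idea can be sketched as follows. The priority rule here (count the coefficients that are significant at the current bit-plane via a simple threshold comparison) is a hypothetical stand-in for the look-up-table and shared-context rule of the actual JSVM scheme:

```python
def bitplane_priority_order(blocks, plane):
    # Order blocks for transmission within one coding cycle: blocks with
    # more coefficients significant at this bit-plane are coded first.
    # Both encoder and decoder can compute this from already-known data,
    # so no priority overhead needs to be transmitted.
    threshold = 1 << plane
    def priority(block):
        return sum(1 for c in block if abs(c) >= threshold)
    return sorted(range(len(blocks)), key=lambda i: -priority(blocks[i]))
```

For coefficient blocks [[1, 1], [8, 9], [4, 0]] at bit-plane 3 (threshold 8), the middle block is transmitted first.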

  6. CALMAR: A New Versatile Code Library for Adjustment from Measurements

    NASA Astrophysics Data System (ADS)

    Grégoire, G.; Fausser, C.; Destouches, C.; Thiollay, N.

    2016-02-01

CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all its linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress; these cases will be included in the code package that will be made available to the community. Future developments are discussed: the code should support the new Generalized Nuclear Data (GND) format, which has many advantages compared to ENDF.

  7. Source Term Code Package: a user's guide (Mod 1)

    SciTech Connect

    Gieseke, J.A.; Cybulskis, P.; Jordan, H.; Lee, K.W.; Schumacher, P.M.; Curtis, L.A.; Wooton, R.O.; Quayle, S.F.; Kogan, V.

    1986-07-01

    As part of a major reassessment of the release of radioactive materials to the environment (source terms) in severe reactor accidents, a group of state-of-the-art computer codes was utilized to perform extensive analyses. A major product of this source term reassessment effort was a demonstrated methodology for analyzing specific accident situations to provide source term predictions. The computer codes forming this methodology have been upgraded and modified for release and further use. This system of codes has been named the Source Term Code Package (STCP) and is the subject of this user's guide. The guide is intended to provide an understanding of the STCP structure and to facilitate STCP use. The STCP was prepared for operation on a CDC system but is written in FORTRAN-77 to permit transportability. In the current version (Mod 1) of the STCP, the various calculational elements fall into four major categories represented by the codes MARCH3, TRAP-MELT3, VANESA, and NAUA/SPARC/ICEDF. The MARCH3 code is a combination of the MARCH2, CORSOR-M, and CORCON-Mod 2 codes. The TRAP-MELT3 code is a combination of the TRAP-MELT2.0 and MERGE codes.

  8. RESRAD-CHEM: A computer code for chemical risk assessment

    SciTech Connect

    Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.

    1993-10-01

    RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency`s (EPA`s) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.

  9. Documentation for RISKIN: A risk integration code for MACCS (MELCOR Accident Consequence Code System) output

    SciTech Connect

    Rollstin, J.A. ); Hong, Kou-John )

    1990-11-01

This document has been prepared as a user's guide for the computer program RISKIN developed at Sandia National Laboratories. The RISKIN code generates integrated risk tables and the weighted mean risk associated with a user-selected set of consequences from up to five output files generated by the MELCOR Accident Consequence Code System (MACCS). Each MACCS output file can summarize the health and economic consequences resulting from up to 60 distinct severe accident source terms. Since the accident frequency associated with these source terms is not included as a MACCS input parameter, a postprocessor is required to derive results that incorporate accident frequency. The RISKIN code is such a postprocessor. RISKIN searches the MACCS output files for the mean and peak consequence values and the complementary cumulative distribution function (CCDF) tables for each requested consequence. Once obtained, RISKIN combines these data with accident frequency data to produce frequency-weighted results. A postprocessor provides RISKIN an interface to the proprietary DISSPLA plot package. The RISKIN code has been written using ANSI Standard FORTRAN 77 to maximize its portability.
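The frequency weighting RISKIN performs downstream of MACCS can be sketched in miniature. The function name and the two-source-term numbers below are hypothetical illustrations, and parsing of real MACCS output files is omitted:

```python
def frequency_weighted_risk(frequencies, consequences):
    """Per-source-term risk (frequency x consequence) and the
    frequency-weighted mean consequence across source terms."""
    risks = [f * c for f, c in zip(frequencies, consequences)]
    total_freq = sum(frequencies)
    mean = sum(risks) / total_freq if total_freq else 0.0
    return risks, mean
```

For example, two source terms with frequencies 1e-5/yr and 1e-6/yr and consequences 10 and 1000 yield per-term risks of 1e-4 and 1e-3 and a frequency-weighted mean consequence of 100.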

  10. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  11. Bar-Code System for a Microbiological Laboratory

    NASA Technical Reports Server (NTRS)

    Law, Jennifer; Kirschner, Larry

    2007-01-01

    A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.

  12. The Creation and Implementation of a Student Civility Code

    ERIC Educational Resources Information Center

    Lucas, John J.; Rolden-Scheib, Gloria

    2006-01-01

    This paper examines the design and implementation of a student civility code at a regional campus of a Big Ten University. The paper also provides some guidelines to address student incivility in both the classroom and service offices throughout a higher education institution. The communication of such a student code to promote civility was…

  13. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-11-01

As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses a rate 1/3, nu = 6 (with 2^nu = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected, allowing a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites, which requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage of being 45 deg rotationally invariant, meaning the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.

  14. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-01-01

As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses a rate 1/3, nu = 6 (with 2^nu = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected, allowing a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites, which requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage of being 45 deg rotationally invariant, meaning the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.

  15. Ethical codes for attorneys: a brief introduction.

    PubMed

    Zarkowski, P

    1997-01-01

Ethical standards for lawyers are contained in the Model Rules of Professional Conduct (which lays out both "shall/shall not" rules and "may" suggestions in nine broad areas) and the Model Code of Professional Responsibility (which covers essentially the same topic areas but offers more detailed commentary). Topics included in the Rules are the client-lawyer relationship; the attorney's role as an advocate and counselor; law firms and associations; public service; transactions with individuals other than clients; and information about legal services, including advertising, firm names, and letterhead. The American Dental Association's Principles of Ethics and Code of Professional Conduct is organized around the five ethical principles of patient autonomy, nonmaleficence, beneficence, justice, and veracity. There are substantial similarities in intent between the ethical standards of dentists and lawyers; there are also differences. PMID:9270220

  16. CESAR: A Code for Nuclear Fuel and Waste Characterisation

    SciTech Connect

    Vidal, J.M.; Grouiller, J.P.; Launay, A.; Berthion, Y.; Marc, A.; Toubon, H.

    2006-07-01

CESAR (Simplified Evolution Code Applied to Reprocessing) is a depletion code developed through a joint program between CEA and COGEMA. In the late 1980s, the first use of this code dealt with nuclear measurement at the laboratories of the La Hague reprocessing plant. The use of CESAR was then extended to the characterisation of all entrance materials and, via tracers, of all produced waste. The code can distinguish more than 100 heavy nuclides, 200 fission products, and 100 activation products, and it can characterise both the fuel and the structural material of the fuel. CESAR can also make depletion calculations from 3 months to 1 million years of cooling time. Between 2003 and 2005, the fifth version of the code was developed; the modifications related to the harmonisation of the code's nuclear data with the JEF2.2 nuclear data file. This paper describes the code and explains its extensive use at the La Hague reprocessing plant and in prospective studies. The second part focuses on the modifications of the latest version, and describes the application field and the qualification of the code. Many companies and the IAEA use CESAR today. CESAR offers a very user-friendly Graphical User Interface. (authors)

  17. Eighteen rules for writing a code of professional ethics.

    PubMed

    Davis, Michael

    2007-06-01

    Most professional societies, scientific associations, and the like that undertake to write a code of ethics do so using other codes as models but without much (practical) guidance about how to do the work. The existing literature on codes is much more concerned with content than procedure. This paper adds to guidance already in the literature what I learned from participating in the writing of an important code of ethics. The guidance is given in the form of "rules" each of which is explained and (insofar as possible) justified. The emphasis is on procedure. PMID:17717731

  18. A New Detailed Term Accounting Opacity Code: TOPAZ

    SciTech Connect

    Iglesias, C A; Chen, M H; Isaacs, W; Sonnad, V; Wilson, B G

    2004-04-28

    A new opacity code, TOPAZ, which explicitly includes configuration term structure in the bound-bound transitions is being developed. The goal is to extend the current capabilities of detailed term accounting opacity codes such as OPAL that are limited to lighter elements of astrophysical interest. At present, opacity calculations of heavier elements use statistical methods that rely on the presence of myriad spectral lines for accuracy. However, statistical approaches have been shown to be inadequate for astrophysical opacity calculations. An application of the TOPAZ code will be to study the limits of statistical methods. Comparisons of TOPAZ to other opacity codes as well as experiments are presented.

  19. IGB grid: User's manual (A turbomachinery grid generation code)

    NASA Technical Reports Server (NTRS)

    Beach, T. A.; Hoffman, G.

    1992-01-01

    A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.

  20. In search of a 2-dB coding gain

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.; Vo, Q. D.

    1985-01-01

A recent code search found a (15,1/5), a (14,1/6), and a (15,1/6) convolutional code which, when concatenated with a 10-bit (1023,959) Reed-Solomon (RS) code, achieve a bit-error rate (BER) of 10^-6 at a bit signal-to-noise ratio (SNR) of 0.50 dB, 0.47 dB, and 0.42 dB, respectively. All three codes outperform our baseline, the Voyager communication system, which achieves a BER of 10^-6 at a bit SNR of 2.53 dB, by more than 2 dB. The 2 dB coding improvement goal was thus exceeded.

  1. Coded source neutron imaging with a MURA mask

    NASA Astrophysics Data System (ADS)

    Zou, Y. B.; Schillinger, B.; Wang, S.; Zhang, X. S.; Guo, Z. Y.; Lu, Y. R.

    2011-09-01

In coded source neutron imaging the single aperture commonly used in neutron radiography is replaced with a coded mask. Using a coded source can improve the neutron flux at the sample plane when a very high L/D ratio is needed, and is thus a possible way to reduce the exposure time required to obtain a neutron image with a very high L/D ratio. A 17×17 modified uniformly redundant array coded source was tested in this work. There are 144 holes of 0.8 mm diameter on the coded source. The neutron flux from the coded source is as high as that from a single 9.6 mm aperture, while its effective L/D is the same as that of a 0.8 mm aperture. The Richardson-Lucy maximum likelihood algorithm was used for image reconstruction. Compared to an in-line phase contrast neutron image taken with a 1 mm aperture, the coded source takes much less time to obtain an image of similar quality.
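A minimal 1D sketch of the Richardson-Lucy iteration used for reconstruction, assuming circular boundary handling; the real reconstruction is 2D over the 17×17 MURA pattern, and the names data and psf are ours:

```python
def convolve(x, psf):
    # Same-size circular convolution in plain Python.
    n = len(x)
    return [sum(x[(i - j) % n] * psf[j] for j in range(len(psf)))
            for i in range(n)]

def richardson_lucy(data, psf, iters=50):
    # Multiplicative maximum-likelihood update: the estimate is scaled
    # by the correlation of (data / blurred estimate) with the PSF,
    # which preserves non-negativity of the estimate.
    n = len(data)
    est = [sum(data) / n] * n          # flat initial guess
    psf_flip = psf[::-1]               # correlation uses the mirrored PSF
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [d / max(b, 1e-12) for d, b in zip(data, blurred)]
        corr = convolve(ratio, psf_flip)
        est = [e * c for e, c in zip(est, corr)]
    return est
```

With a trivial identity PSF the first iteration already returns the data, which is a handy sanity check when adapting the sketch.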

  2. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  3. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....C. 552(a) and 1 CFR part 51. Copies of the ASME Boiler and Pressure Vessel Code, the ASME Code for....gov/federal-register/cfr/ibr-locations.html. (1) As used in this section, references to Section III... accordance with 10 CFR part 50, Appendix J, Option A or Option B on which the applicant's or...

  4. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....C. 552(a) and 1 CFR part 51. Copies of the ASME Boiler and Pressure Vessel Code, the ASME Code for....gov/federal-register/cfr/ibr-locations.html. (1) As used in this section, references to Section III... accordance with 10 CFR part 50, Appendix J, Option A or Option B on which the applicant's or...

  5. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....C. 552(a) and 1 CFR part 51. Copies of the ASME Boiler and Pressure Vessel Code, the ASME Code for.../federal-register/cfr/ibr-locations.html. (1) As used in this section, references to Section III refer to... accordance with 10 CFR part 50, Appendix J, Option A or Option B on which the applicant's or...

  6. Rationale for Student Dress Codes: A Review of School Handbooks

    ERIC Educational Resources Information Center

    Freeburg, Elizabeth W.; Workman, Jane E.; Lentz-Hees, Elizabeth S.

    2004-01-01

    Through dress codes, schools establish rules governing student appearance. This study examined stated rationales for dress and appearance codes in secondary school handbooks; 182 handbooks were received. Of 150 handbooks containing a rationale, 117 related dress and appearance regulations to students' right to a non-disruptive educational…

  7. A Program Evaluation of Classroom Data Collection with Bar Codes.

    ERIC Educational Resources Information Center

    Saunders, Muriel D.; And Others

    1993-01-01

    A special education record-keeping system using bar code symbols and optical scanners is described. Bar code symbols were created for each Individualized Educational Plan objective, and symbols are scanned when students emit targeted behaviors. A weekly printed report of student performance is produced. Advantages, disadvantages, and costs are…

  8. Code-Mixing as a Bilingual Instructional Strategy

    ERIC Educational Resources Information Center

    Jiang, Yih-Lin Belinda; García, Georgia Earnest; Willis, Arlette Ingram

    2014-01-01

    This study investigated code-mixing practices, specifically the use of L2 (English) in an L1 (Chinese) class in a U.S. bilingual program. Our findings indicate that the code-mixing practices made and prompted by the teacher served five pedagogical functions: (a) to enhance students' bilingualism and bilingual learning, (b) to review and…

  9. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
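The notion of a per-bit probability estimate driven by previously encoded bits can be illustrated with a conventional adaptive binary arithmetic coder using Laplace counts. Note the paper proposes its technique as an alternative to arithmetic coding, so this is a baseline for comparison, not the paper's method; all names are ours:

```python
def adaptive_encode(bits):
    # Both coder and decoder maintain identical symbol counts, so the
    # per-bit probability estimate needs no side information.
    low, high = 0.0, 1.0
    c0, c1 = 1, 1                      # Laplace (add-one) counts
    for b in bits:
        p0 = c0 / (c0 + c1)
        split = low + (high - low) * p0
        if b == 0:
            high = split; c0 += 1
        else:
            low = split; c1 += 1
    return (low + high) / 2            # any value inside the final interval

def adaptive_decode(x, n):
    low, high = 0.0, 1.0
    c0, c1 = 1, 1
    out = []
    for _ in range(n):
        p0 = c0 / (c0 + c1)
        split = low + (high - low) * p0
        if x < split:
            out.append(0); high = split; c0 += 1
        else:
            out.append(1); low = split; c1 += 1
    return out
```

Because the decoder replays the exact same probability updates, the round trip is lossless for short sequences even with floating-point intervals.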

  10. The RCVS codes of conduct: what's in a word?

    PubMed

    McCulloch, Steven; Reiss, Michael; Jinman, Peter; Wathes, Christopher

    2014-01-18

    In 2012, the RCVS introduced a new Code of Professional Conduct for Veterinary Surgeons, replacing the Guide to Professional Conduct which had existed until then. Is a common Code relevant for the veterinarian's many roles? There's more to think about here than just the change of name, write Steven McCulloch, Michael Reiss, Peter Jinman and Christopher Wathes. PMID:24443467

  11. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
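For reference, the HLL Riemann solver mentioned above computes an interface flux from the left/right states, their fluxes, and estimates of the slowest and fastest wave speeds. This is the generic scalar form of the formula, not the code's MHD implementation:

```python
def hll_flux(uL, uR, fL, fR, sL, sR):
    # HLL approximate Riemann flux: pure upwinding when all waves move
    # one way, otherwise the standard average over the [sL, sR] fan.
    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR
    return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)
```

For Burgers' flux f(u) = u^2/2 with uL = 1, uR = 0 and wave-speed bounds sL = 0, sR = 1, all waves move rightward and the formula reduces to the left flux 0.5.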

  12. A novel super-FEC code based on concatenated code for high-speed long-haul optical communication systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jianguo; Ye, Wenwei; Jiang, Ze; Mao, Youju; Wang, Wei

    2007-05-01

The structure of a novel super forward error correction (Super-FEC) code type based on concatenated codes for high-speed long-haul optical communication systems is studied in this paper. The Reed-Solomon (RS) (255, 239) + Bose-Chaudhuri-Hocquenghem (BCH) (1023, 963) concatenated code is presented after the characteristics of concatenated codes and the two Super-FEC code types presented in ITU-T G.975.1 have been theoretically analyzed. Simulation results show that this novel code type, compared with the RS (255, 239) + convolutional self-orthogonal code (CSOC) (k0/n0 = 6/7, J = 8) code in ITU-T G.975.1, has lower redundancy and better error-correction capability, and its net coding gain (NCG) at the third iteration is 0.57 dB more than that of the RS (255, 239) + CSOC code at the third iteration for a bit error rate (BER) of 10^-12. Therefore, the novel code type is better suited to long-haul, larger-capacity, higher-bit-rate optical communication systems. Furthermore, the design and implementation of the novel concatenated code type are also discussed.
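The overall rate of the proposed concatenation follows directly from the component code parameters; a quick sketch (the rate of serially concatenated block codes is the product of the component rates k/n):

```python
def concatenated_rate(*codes):
    # Overall rate of serially concatenated block codes, each given
    # as a pair (n, k): the inner code re-encodes the outer codeword.
    rate = 1.0
    for n, k in codes:
        rate *= k / n
    return rate
```

RS(255, 239) concatenated with BCH(1023, 963) gives an overall rate of about 0.88, from which the scheme's redundancy overhead can be read off.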

  13. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation so as to best present the code base to developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools. PMID:25910273

  14. Toward a Code of Conduct for the Presidency

    ERIC Educational Resources Information Center

    Fleming, J. Christopher

    2012-01-01

    A presidential code of conduct is needed more today than ever before. College and university presidents are being required to do more without the proper training to succeed. Presidents from outside the academy enter academia with normative patterns and codes of conduct that served them well in their previous occupations but now have the potential…

  15. Coding as a Trojan Horse for Mathematics Education Reform

    ERIC Educational Resources Information Center

    Gadanidis, George

    2015-01-01

    The history of mathematics educational reform is replete with innovations taken up enthusiastically by early adopters without significant transfer to other classrooms. This paper explores the coupling of coding and mathematics education to create the possibility that coding may serve as a Trojan Horse for mathematics education reform. That is,…

  16. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC); in this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is a piecewise image, filled with sharp contours and smooth interiors, and contours matter more than interiors in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into its contour and its interior, and two different procedures are employed to code them respectively. A vector-based strategy is applied to code the contour lines: straight lines in contours cost few bits since they are regarded as vectors, while pixels outside straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, the contour is coded in a lossless way whereas the interior is coded in a lossy way. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.
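The first step of the scheme, splitting the depth image into layers by depth-value intervals, can be sketched as follows; the boundary values and names are hypothetical:

```python
from bisect import bisect_right

def layer_depth_image(depth_rows, boundaries):
    # Assign each pixel a layer index by locating its depth value among
    # the sorted interval boundaries; regions are then segmented within
    # each layer in the full scheme.
    return [[bisect_right(boundaries, d) for d in row] for row in depth_rows]
```

With boundaries [64, 128, 192], an 8-bit depth map is split into four layers, one per interval.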

  17. A novel unified coding analytical method for Internet of Things

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Zhang, JianHong

    2013-08-01

This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out `displacement goods' and `physical objects' and expounds the relationship between them. The method details the item coding principles, establishes a one-to-one relationship between the three-dimensional spatial coordinates of points and global manufacturers, and can expand without limit. It solves the problem of unified coding across the production and circulation phases, and further explains how to update the item information corresponding to the coding during the sale and use stages, so that the Internet of Things can carry out real-time monitoring and intelligent management of each item.

  18. Error-correcting code on a cactus: A solvable model

    NASA Astrophysics Data System (ADS)

    Vicente, R.; Saad, D.; Kabashima, Y.

    2000-09-01

    An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.

  19. Coded aperture imaging with a HURA coded aperture and a discrete pixel detector

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

An investigation into the gamma ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma ray astronomical telescopes in terms of providing high quality sky images in conjunction with an imaging plane that can reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented, and the imaging quality is quantified in terms of the signal to noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce the experimental conditions as closely as possible. The 60 degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, enabling a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for reducing the coding error noise associated with the gamma ray images.

  20. Selective video encryption of a distributed coded bitstream using LDPC codes

    NASA Astrophysics Data System (ADS)

    Um, Hwayoung; Delp, Edward J.

    2006-02-01

Selective encryption is a technique used to minimize computational complexity or enable system functionality by encrypting only a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we rely not only on the beneficial effects of redundancy reduction, but also on the ability of the compression algorithm to concentrate the important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to determine which types of bits are most effective for selective encryption of a video sequence that has been compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we achieve a reduction in system complexity.
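The core idea, encrypting only the sensitive positions of a bitstream while leaving the rest in the clear, can be sketched as follows. Everything here is a toy illustration under stated assumptions: the byte positions, the data, and the SHA-256 counter keystream are mine (it is not a real cipher and has nothing to do with the paper's LDPC-based system).

```python
import hashlib

def keystream(key, n):
    """Simple SHA-256 counter-mode keystream -- illustrative only, not a real cipher."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def selective_encrypt(bitstream, sensitive, key):
    """XOR only the bytes at the 'sensitive' positions; the rest stays in the clear."""
    ks = keystream(key, len(sensitive))
    out = bytearray(bitstream)
    for k, i in zip(ks, sensitive):
        out[i] ^= k
    return bytes(out)

data = b"headerSENSITIVEpayload"
idx = list(range(6, 15))      # pretend bytes 6..14 carry the perceptually critical data
enc = selective_encrypt(data, idx, b"secret")
assert enc[:6] == b"header" and enc[15:] == b"payload"   # untouched portions
assert selective_encrypt(enc, idx, b"secret") == data    # XOR is its own inverse
```

The complexity saving comes from `len(idx)` being a small fraction of `len(data)`.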

  1. A Simple Cooperative Relaying with Alamouti Coded Transmission

    NASA Astrophysics Data System (ADS)

    Yamaoka, Tomoya; Hara, Yoshitaka; Fukui, Noriyuki; Kubo, Hiroshi; Yamazato, Takaya

Cooperative diversity using space-time codes offers effective space diversity with low complexity, but conventional schemes require a space-time coding process in the relay nodes. We propose a simple cooperative relay scheme based on Alamouti coded transmission. In the scheme, the source node transmits Alamouti coded signal sequences and the sink node receives the signal sequence via two coordinated relay nodes. At the relay nodes, the operation is simply permutation and forwarding of the signal sequence. In the proposed scheme, none of the relay nodes needs quadrature detection or space-time coding, yet the simple relay process offers effective space diversity. Simulations confirm the effectiveness of the proposed relay process.
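The standard Alamouti block code that the source node transmits can be sketched as follows. The channel model (flat fading, noiseless, known gains) and all variable names are illustrative assumptions; the relaying itself is not modeled.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Return the 2x2 Alamouti block: rows = time slots, columns = antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining; recovers s1, s2 up to the gain |h1|^2 + |h2|^2."""
    g = abs(h1)**2 + abs(h2)**2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

s1, s2 = 1 + 1j, -1 + 1j            # two QPSK symbols
h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j    # flat-fading gains of the two paths
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]    # received in slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]    # received in slot 2
d1, d2 = alamouti_decode(r1, r2, h1, h2)
assert np.allclose([d1, d2], [s1, s2])
```

The orthogonality of the block is what lets the relays merely permute and forward: the diversity combining stays a simple linear operation at the sink.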

  2. A new two dimensional spectral/spatial multi-diagonal code for noncoherent optical code division multiple access (OCDMA) systems

    NASA Astrophysics Data System (ADS)

    Kadhim, Rasim Azeez; Fadhil, Hilal Adnan; Aljunid, S. A.; Razalli, Mohamad Shahrazel

    2014-10-01

A new two-dimensional code family, namely the two-dimensional multi-diagonal (2D-MD) codes, is proposed for spectral/spatial non-coherent OCDMA systems based on the one-dimensional MD code. Since the MD code has the property of zero cross correlation, the proposed 2D-MD code inherits this property, so multiple-access interference (MAI) is fully eliminated and phase-induced intensity noise (PIIN) is suppressed. Code performance is analyzed in terms of bit error rate (BER) while considering the effects of shot noise, PIIN, and thermal noise. The performance of the proposed code is compared with the related MD, modified quadratic congruence (MQC), two-dimensional perfect difference (2D-PD) and two-dimensional diluted perfect difference (2D-DPD) codes. The analytical and simulation results reveal that the proposed 2D-MD code outperforms the other codes. Moreover, a large number of simultaneous users can be accommodated at low BER and high data rates.
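The zero cross correlation property that eliminates MAI can be checked numerically on a toy code. This is not the MD construction itself (which arranges chips in diagonal patterns); it is a minimal code with disjoint chip sets that exhibits the same property the abstract relies on.

```python
import numpy as np

def zcc_code(users, weight):
    """Toy zero-cross-correlation code: each user gets 'weight' exclusive chips.

    Illustrates the property the MD construction achieves, not the MD
    construction itself.
    """
    length = users * weight
    C = np.zeros((users, length), dtype=int)
    for u in range(users):
        C[u, u * weight:(u + 1) * weight] = 1
    return C

C = zcc_code(3, 2)
G = C @ C.T                                    # correlation matrix
assert np.all(np.diag(G) == 2)                 # autocorrelation = code weight
assert np.all(G - np.diag(np.diag(G)) == 0)    # zero cross correlation -> no MAI
```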

  3. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver, while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  4. A New AMR Code for Relativistic Magnetohydrodynamics in Dynamical Spacetimes: Numerical Method and Code Validation

    NASA Astrophysics Data System (ADS)

    Liu, Yuk Tung; Etienne, Zachariah; Shapiro, Stuart

    2011-04-01

The Illinois relativity group has written and tested a new GRMHD code, which is compatible with adaptive-mesh refinement (AMR) provided by the widely-used Cactus/Carpet infrastructure. Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the BSSN formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential. The divergenceless constraint div(B) = 0 is enforced automatically, since the magnetic field is computed as the curl of the vector potential. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We present the numerical method and code validation tests for both Minkowski and curved spacetimes. The tests include magnetized shocks, nonlinear Alfven waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.
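Why evolving the vector potential keeps div(B) = 0 can be seen in a few lines: finite-difference operators along distinct axes commute, so the discrete divergence of a discrete curl vanishes identically. The sketch below uses simple centered differences on a periodic grid; it is a generic illustration of the principle, not the paper's staggered AMR scheme.

```python
import numpy as np

def d(f, ax):
    # centered difference with periodic wrap
    return (np.roll(f, -1, axis=ax) - np.roll(f, 1, axis=ax)) / 2.0

def curl(A):
    Ax, Ay, Az = A
    return np.array([d(Az, 1) - d(Ay, 2),
                     d(Ax, 2) - d(Az, 0),
                     d(Ay, 0) - d(Ax, 1)])

def div(B):
    return d(B[0], 0) + d(B[1], 1) + d(B[2], 2)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 16, 16, 16))   # arbitrary vector potential on a grid
B = curl(A)
# mixed difference operators commute, so div(curl A) = 0 to machine precision
print(np.max(np.abs(div(B))))
```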

  5. Electric utility value determination for wind energy. Volume II. A user's guide. [WTP code; WEIBUL code; ROSEN code; ULMOD code; FINAM code

    SciTech Connect

    Percival, David; Harper, James

    1981-02-01

This report describes a method for determining the value of wind energy systems to electric utilities. The method is implemented as a package of computer models, available from SERI, that can be used with most utility planning models. The final output of these models gives a financial value ($/kW) of the wind energy system under consideration in the specific utility system. This volume, the second of two, is a user's guide for the computer programs available from SERI. The first volume describes the value determination methodology and discusses each step of the computer modeling in detail.

  6. A Two-Dimensional Compressible Gas Flow Code

    Energy Science and Technology Software Center (ESTSC)

    1995-03-17

F2D is a general-purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm; the corrector step includes a Poisson pressure equation. The finite difference form of the equations is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code to problems ranging from free fluid flow to shock tubes and flow in heated porous media.

  7. Implementation of a Blowing Boundary Condition in the LAURA Code

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Gnoffo, Peter A.

    2008-01-01

    Preliminary steps toward modeling a coupled ablation problem using a finite-volume Navier-Stokes code (LAURA) are presented in this paper. Implementation of a surface boundary condition with mass transfer (blowing) is described followed by verification and validation through comparisons with analytic results and experimental data. Application of the code to a carbon-nosetip ablation problem is demonstrated and the results are compared with previously published data. It is concluded that the code and coupled procedure are suitable to support further ablation analyses and studies.

  8. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  9. The Numerical Electromagnetics Code (NEC) - A Brief History

    SciTech Connect

    Burke, G J; Miller, E K; Poggio, A J

    2004-01-20

The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for numerous copies (perhaps thousands) that were acquired through other means, capitalizing on the open-source code, the absence of distribution controls prior to NEC3, and the availability of versions on the Internet. In this paper we briefly review the history of the code, which is concisely displayed in Figure 1. We show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately produced NEC, and how it evolved to the present-day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code, both of which can be found in the code documents.

  10. SEQassembly: A Practical Tools Program for Coding Sequences Splicing

    NASA Astrophysics Data System (ADS)

    Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming

A CDS (coding sequence) is the portion of an mRNA sequence composed of a number of exon segments. The construction of the CDS sequence is important for profound genetic analysis such as genotyping. A program in the MATLAB environment is presented that can process batches of sample sequences into coding segments under the guidance of reference exon models, and splice the segments from the same sample source into a CDS according to the exon order in a queue file. This program is useful in transcriptional polymorphism detection and gene function studies.
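The splicing step, concatenating exon segments in queue-file order, reduces to an ordered join. The sketch below is a Python rendering of that step (the paper's program is in MATLAB); the exon names and sequences are made up for illustration.

```python
def splice_cds(segments, order):
    """Concatenate exon coding segments into a CDS following the given exon order.

    'segments' maps exon names to sequence strings; 'order' plays the role of
    the queue file that fixes the exon order for a sample.
    """
    return "".join(segments[name] for name in order)

exons = {"exon1": "ATGGCT", "exon2": "TTCAAG", "exon3": "TGGTAA"}
cds = splice_cds(exons, ["exon1", "exon2", "exon3"])
assert cds == "ATGGCTTTCAAGTGGTAA"
assert cds.startswith("ATG") and cds.endswith("TAA")   # start and stop codons
assert len(cds) % 3 == 0                               # whole number of codons
```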

  11. A Combinatorial Geometry Code System with Model Testing Routines.

    Energy Science and Technology Software Center (ESTSC)

    1982-10-08

GIFT, the Geometric Information For Targets code system, is used to mathematically describe the geometry of a three-dimensional vehicle such as a tank, truck, or helicopter. The geometric data generated are merged in vulnerability computer codes with the energy-effects data of a selected munition to simulate the probabilities of malfunction or destruction of components when the vehicle is attacked by that munition. GIFT options include those which graphically display the vehicle, those which check the correctness of the geometry data, those which compute physical characteristics of the vehicle, and those which generate the geometry data used by vulnerability codes.

  12. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T. ); Fischer, T.R. . Dept. of Electrical and Computer Engineering)

    1990-01-01

In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for optimizing the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  13. ALEPH2 - A general purpose Monte Carlo depletion code

    SciTech Connect

    Stankovskiy, A.; Van Den Eynde, G.; Baeten, P.; Trakas, C.; Demy, P. M.; Villatte, L.

    2012-07-01

The Monte Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. A previous version of the code implemented the coupling between Monte Carlo transport (any version of MCNP or MCNPX) and the 'deterministic' depletion code ORIGEN-2.2, but had important deficiencies in nuclear data treatment and limitations inherent to ORIGEN-2.2. A new version of the code, ALEPH2, has several unique features making it outstanding among depletion codes. The most important feature is full data consistency between the steady-state Monte Carlo and time-dependent depletion calculations. The latest-generation general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII and JENDL-4) are fully implemented, including special-purpose activation, spontaneous fission, fission product yield and radioactive decay data. The built-in depletion algorithm eliminates the uncertainties associated with obtaining the time-dependent nuclide concentrations. A predictor-corrector mechanism and calculations of nuclear heating, decay heat, and decay neutron sources are available as well. The code has been validated against the results of the REBUS experimental program; ALEPH2 shows better agreement with measured data than other depletion codes. (authors)

  14. A preprocessor for FORTRAN source code produced by reduce

    NASA Astrophysics Data System (ADS)

    Kaneko, Toshiaki; Kawabata, Setsuya

    1989-09-01

For estimating total cross sections and various spectra for complicated processes in high energy physics, the most time-consuming part is numerical integration over the phase volume. When a FORTRAN source code for the integrand is produced by REDUCE, it is often not only too long but also insufficiently reduced for a FORTRAN compiler to optimize. A program package called SPROC has been developed to convert FORTRAN source code to a more optimized form and to divide the code into subroutines whose lengths are short enough for FORTRAN compilers. It can also generate vectorizable code, which achieves high efficiency on vector computers. The output is given in a form suitable for the numerical integration package BASES and its vector-computer version VBASES. With this improvement, the CPU time for integration is shortened by a factor of about two on a scalar computer and by several times on a vector computer.

  15. A Coding System for Analysing a Spoken Text Database.

    ERIC Educational Resources Information Center

    Cutting, Joan

    1994-01-01

This paper describes a coding system devised to analyze conversations of graduate students in applied linguistics at Edinburgh University. The system was devised to test the hypothesis that as shared knowledge among conversation participants grows, the text of in-group members is denser in cues than that of strangers. The informal…

  16. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  17. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  18. Shot level parallelization of a seismic inversion code using PVM

    SciTech Connect

    Versteeg, R.J.; Gockenback, M.; Symes, W.W.; Kern, M.

    1994-12-31

This paper presents experience with parallelization of DSO, a seismic inversion code developed in The Rice Inversion Project, using PVM. It focuses on one aspect: trying to run efficiently on a cluster of 4 workstations. The authors use a coarse-grain parallelism in which they dynamically distribute the shots over the available machines in the cluster. The modeling and migration in their code are parallelized very effectively by this strategy; they have reached an overall performance of 104 Mflops using a configuration of one manager with 3 workers, a speedup of 2.4 versus the serial version, which according to Amdahl's law is optimal given the current design of their code. Further speedup is currently limited by the non-parallelized parts of their code: optimization, linear algebra, and I/O.
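The Amdahl's-law argument above can be made concrete: a 2.4x speedup on 3 workers pins down the parallel fraction of the run. The inversion and the projection to more workers are my own arithmetic, not figures from the paper.

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n workers when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(S, n):
    """Invert Amdahl's law: the parallel fraction implied by speedup S on n workers."""
    return (1.0 - 1.0 / S) / (1.0 - 1.0 / n)

p = parallel_fraction(2.4, 3)          # the reported 2.4x speedup on 3 workers
print(round(p, 3))                     # -> 0.875, i.e. ~87.5% of the run parallelized
print(round(amdahl_speedup(p, 6), 2))  # -> 3.69, diminishing returns on more workers
```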

  19. Towards Realistic Implementations of a Majorana Surface Code.

    PubMed

    Landau, L A; Plugge, S; Sela, E; Altland, A; Albrecht, S M; Egger, R

    2016-02-01

    Surface codes have emerged as promising candidates for quantum information processing. Building on the previous idea to realize the physical qubits of such systems in terms of Majorana bound states supported by topological semiconductor nanowires, we show that the basic code operations, namely projective stabilizer measurements and qubit manipulations, can be implemented by conventional tunnel conductance probes and charge pumping via single-electron transistors, respectively. The simplicity of the access scheme suggests that a functional code might be in close experimental reach. PMID:26894694

  20. A three-dimensional magnetostatics computer code for insertion devices.

    PubMed

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented. PMID:15263552

  1. POPCORN: A comparison of binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result, we find that when the assumptions are equalized, the results are similar. The main differences arise from differing physical input.

  2. A code for calculating intrabeam scattering and beam lifetime

    SciTech Connect

    Kim, C.H.

    1997-05-01

Beam emittances in a circular accelerator with high beam intensity are strongly affected by small-angle intrabeam Coulomb scattering. In the computer simulation model presented here, the authors use three coupled nonlinear differential equations to describe the evolution of the emittances in the transverse and longitudinal planes. These equations include terms which take into account intrabeam scattering, adiabatic damping, microwave instabilities, synchrotron damping, and quantum excitations. A code was written to solve the equations numerically and incorporated into a FORTRAN code library. The library includes circular high-intensity physics routines for intrabeam scattering, Touschek scattering, and the bunch-lengthening effect of higher-harmonic cavities. The code presently runs in the PC environment. A description of the code and some examples are presented.

  3. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
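The syndrome step described above, evaluating the received polynomial at powers of the primitive element, can be sketched in software over GF(32), the symbol field of a (31,15) code (for which n - k = 16 = 2t, so t = 8). This is a generic table-based sketch of the mathematics, not the paper's microprogrammed Galois-field unit or shift-register implementation; the primitive polynomial x^5 + x^2 + 1 is a standard choice and an assumption here.

```python
# GF(32) arithmetic via log/antilog tables, primitive polynomial x^5 + x^2 + 1
exp = [0] * 62
log = [0] * 32
x = 1
for i in range(31):
    exp[i] = x
    log[x] = i
    x <<= 1
    if x & 0x20:          # reduce modulo the primitive polynomial (0b100101)
        x ^= 0b100101
for i in range(31, 62):
    exp[i] = exp[i - 31]  # doubled table avoids a modulo in multiplication

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def syndromes(received, t):
    """S_i = r(alpha^i) for i = 1..2t, by Horner's rule.

    All-zero syndromes <=> the received word is a codeword.
    """
    out = []
    for i in range(1, 2 * t + 1):
        s = 0
        for coeff in received:          # highest-degree coefficient first
            s = gf_mul(s, exp[i]) ^ coeff
        out.append(s)
    return out

word = [0] * 31                          # the all-zero word is a codeword
assert syndromes(word, t=8) == [0] * 16
word[5] ^= 19                            # inject a single symbol error
print(any(s != 0 for s in syndromes(word, t=8)))   # -> True: error detected
```

In the paper the same syndromes are obtained in hardware by reusing the g(x) encoding shift register; the nonzero syndromes would then feed the error-locator computation and Chien search.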

  4. Progress towards a world-wide code of conduct

    SciTech Connect

    Lee, J.A.N.; Berleur, J.

    1994-12-31

In this paper the work of the International Federation for Information Processing (IFIP) Task Group on Ethics is described and the recommendations presented to the General Assembly are reviewed. While a common code of ethics or conduct has not been recommended for consideration by the member societies of IFIP, a set of guidelines for the establishment and evaluation of codes has been produced, and procedures for the assistance of code development have been established within IFIP. This paper proposes that the data collected by the Task Group and the proposed guidelines can be used as a tool for the study of codes of practice, providing a teachable, learnable educational module in courses related to the ethics of computing and computation, and looks at the next steps in bringing ethical awareness to the IT community.

  5. A finite element code for electric motor design

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1994-01-01

FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses first-order elements with Newton iteration for the nonlinear solution. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that varies nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.
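The Newton linearization at the heart of such a nonlinear magnetostatic solve can be shown on a scalar toy problem. The B-H relation below is invented for illustration; in FEMOT the same iteration would act on the full finite-element system rather than a single equation.

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton iteration: repeatedly solve the linearized equation f'(x) dx = f(x)."""
    x = x0
    for _ in range(maxit):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# toy nonlinear "material" relation: find H such that B(H) = H + H**3 equals 3
B = lambda H: H + H**3
H = newton(lambda h: B(h) - 3.0, lambda h: 1.0 + 3.0 * h * h, x0=1.0)
assert abs(B(H) - 3.0) < 1e-9
```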

  6. The Nuremberg Code and the Nuremberg Trial. A reappraisal.

    PubMed

    Katz, J

    1996-11-27

    The Nuremberg Code includes 10 principles to guide physician-investigators in experiments involving human subjects. These principles, particularly the first principle on "voluntary consent," primarily were based on legal concepts because medical codes of ethics existent at the time of the Nazi atrocities did not address consent and other safeguards for human subjects. The US judges who presided over the proceedings did not intend the Code to apply only to the case before them, to be a response to the atrocities committed by the Nazi physicians, or to be inapplicable to research as it is customarily carried on in medical institutions. Instead, a careful reading of the judgment suggests that they wrote the Code for the practice of human experimentation whenever it is being conducted. PMID:8922453

  7. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
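The claimed equivalence between frame misalignment and bit-flip noise for the direct one-qubit-per-pixel coding can be checked with elementary linear algebra, under the standard model (my assumption) that the receiver's measurement basis is the emitter's basis rotated by an angle theta on the Bloch sphere.

```python
import numpy as np

def flip_probability(theta):
    """Probability that a pixel encoded as |0> is read as 1 by a receiver
    whose measurement axis is tilted by theta (no shared reference frame).

    Reproduces the equivalent bit-flip noise p = sin^2(theta/2), which
    grows with the misalignment as stated above.
    """
    ket0 = np.array([1.0, 0.0])                     # emitter's |0>
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    tilted_ket1 = np.array([-s, c])                 # receiver's |1>
    return abs(tilted_ket1 @ ket0) ** 2

for deg in (0, 30, 90, 180):
    th = np.radians(deg)
    assert np.isclose(flip_probability(th), np.sin(th / 2) ** 2)
print(round(flip_probability(np.radians(90)), 3))   # -> 0.5
```

The entangled two-qubit coding of the paper avoids this error entirely, at the cost of two qubits per pixel.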

  8. A parallel and modular deformable cell Car-Parrinello code

    NASA Astrophysics Data System (ADS)

    Cavazzoni, Carlo; Chiarotti, Guido L.

    1999-12-01

We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm, including variable-cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90 and makes use of programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependences among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layered structure permits high portability. The code displays an almost linear speed-up over a wide range of numbers of processors, independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (which, for a fixed problem, increases with the number of processing elements) as a temporary buffer to store wave-function transforms. This code has been used to simulate water and ammonia at giant-planet conditions for systems as large as 64 molecules for ~50 ps.

  9. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-04-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown to be much more resistant to quantum bit-flip noise than the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
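The rotational invariance that such a scheme relies on can be checked numerically: the two-qubit singlet state is left unchanged, up to a global phase, by any rotation applied identically to both qubits. The sketch below is my own illustration of that fact, not the authors' code:

```python
import numpy as np

def random_su2(rng):
    """Draw a random 2x2 special unitary (a qubit rotation)."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # fix the QR phase convention
    return q / np.sqrt(np.linalg.det(q))              # force determinant 1

# Singlet state (|01> - |10>)/sqrt(2): antisymmetric, hence invariant
# (up to a global phase) under any collective rotation U (x) U.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(100):
    u = random_su2(rng)
    rotated = np.kron(u, u) @ singlet
    assert abs(abs(np.vdot(singlet, rotated)) ** 2 - 1.0) < 1e-12
print("singlet is invariant under all collective rotations tested")
```

A projective measurement onto the singlet versus its orthogonal complement is therefore reference-frame independent, which is the kind of two-outcome joint measurement on the qubit pair that the abstract describes.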

  10. OncodriveFML: a general framework to identify coding and non-coding regions with cancer driver mutations.

    PubMed

    Mularoni, Loris; Sabarinathan, Radhakrishnan; Deu-Pons, Jordi; Gonzalez-Perez, Abel; López-Bigas, Núria

    2016-01-01

    Distinguishing driver mutations from somatic mutations in a tumor genome is one of the major challenges of cancer research. This challenge is more acute, and far from solved, for non-coding mutations. Here we present OncodriveFML, a method designed to analyze the pattern of somatic mutations across tumors in both coding and non-coding genomic regions to identify signals of positive selection and, therefore, involvement in tumorigenesis. We describe the method and illustrate its usefulness for identifying protein-coding genes, promoters, untranslated regions, intronic splice regions, and lncRNAs containing driver mutations in several malignancies. PMID:27311963
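The core statistical idea, comparing the functional impact of the observed mutations in a region against randomly drawn mutations from the same region, can be illustrated with a toy permutation test. This is a hedged sketch of the general approach, not OncodriveFML's actual code; all names and scores are hypothetical:

```python
import random

def empirical_p(observed_scores, region_scores, n_perm=10000, seed=0):
    """Empirical p-value: how often does a random draw of the same number of
    mutations from the region score at least as high as the observed ones?"""
    rng = random.Random(seed)
    k = len(observed_scores)
    obs_mean = sum(observed_scores) / k
    hits = sum(
        sum(rng.choices(region_scores, k=k)) / k >= obs_mean
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Hypothetical functional impact scores for all possible mutations in a region
gen = random.Random(1)
region = [gen.random() for _ in range(300)]

# Observed mutations skewed toward high impact, as expected under positive selection
observed = sorted(region)[-5:]
print(f"p = {empirical_p(observed, region):.4f}")  # small p: signal of selection
```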

  11. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
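The idea of load balancing with a space-filling curve can be sketched compactly: order the patches along a Morton (Z-order) curve, then cut the ordered list into contiguous segments of roughly equal total load, one per process. The toy version below is my illustration of the general technique, not PSC's implementation:

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of (x, y) to get the patch's Z-order curve position."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def balance(loads, nproc):
    """loads: {(x, y): work per patch}. Assign contiguous curve segments of
    approximately equal total load to each of nproc processes."""
    order = sorted(loads, key=lambda p: morton_index(*p))
    total = sum(loads.values())
    parts = [[] for _ in range(nproc)]
    cum = 0.0
    for patch in order:
        midpoint = cum + loads[patch] / 2
        owner = min(nproc - 1, int(nproc * midpoint / total))
        parts[owner].append(patch)
        cum += loads[patch]
    return parts

# A 4x4 patch grid with uniform load splits evenly across 4 processes
loads = {(x, y): 1.0 for x in range(4) for y in range(4)}
parts = balance(loads, 4)
print([sum(loads[p] for p in part) for part in parts])  # → [4.0, 4.0, 4.0, 4.0]
```

Because neighbouring patches along a Morton curve are usually neighbours in space, each process also receives a spatially compact set of patches, which keeps communication local.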

  12. ADLIB—A simple database framework for beamline codes

    NASA Astrophysics Data System (ADS)

    Mottershead, C. Thomas

    1993-12-01

    There are many well-developed codes available for beamline design and analysis. A significant fraction of each of these codes is devoted to processing its own unique input language for describing the problem. None of these large, complex, and powerful codes does everything. Adding a new bit of specialized physics can be a difficult task whose successful completion makes the code even larger and more complex. This paper describes an attempt to move in the opposite direction, toward a family of small, simple, single-purpose physics and utility modules, linked by an open, portable, public-domain database framework. These small specialized physics codes begin with the beamline parameters already loaded in the database, and accessible via the handful of subroutines that constitute ADLIB. Such codes are easier to write, and are inherently organized in a manner suitable for incorporation in model-based control system algorithms. Examples include programs for analyzing beamline misalignment sensitivities, for simulating and fitting beam steering data, and for translating among MARYLIE, TRANSPORT, and TRACE3D formats.

  13. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  14. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  15. A Comprehensive Validation Approach Using The RAVEN Code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J; Rinaldi, Ivan; Giannetti, Fabio; Caruso, Gianfranco

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code allows both of these gaps to be addressed. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  16. ICD-10 mortality coding and the NCIS: a comparative study.

    PubMed

    Daking, Leanne; Dodds, Leonie

    2007-01-01

    The collection and utilisation of mortality data are often hindered by limited access to contextual details of the circumstances surrounding fatal incidents. The National Coroners Information System (NCIS) can provide researchers with access to such information. The NCIS search capabilities have been enhanced by the inclusion of data supplied by the Australian Bureau of Statistics (ABS), specifically the ICD-10 Cause of Death code set. A comparative study was conducted to identify consistencies and differences between ABS ICD-10 codes and those that could be generated by utilising the full NCIS record. Discrepancies between the two sets of codes were detected in over 50% of cases, which highlighted the importance of access to complete and timely documentation in the assignment of accurate and detailed cause of death codes. PMID:18195402

  17. Codes, standards, and PV power systems. A 1996 status report

    SciTech Connect

    Wiles, J

    1996-06-01

    As photovoltaic (PV) electrical power systems gain increasing acceptance for both off-grid and utility-interactive applications, the safety, durability, and performance of these systems gain in importance. Local and state jurisdictions in many areas of the country require that all electrical power systems be installed in compliance with the requirements of the National Electrical Code® (NEC®). Utilities and governmental agencies are now requiring that PV installations and components also meet a number of Institute of Electrical and Electronics Engineers (IEEE) standards. PV installers are working more closely with licensed electricians and electrical contractors who are familiar with existing local codes and installation practices. PV manufacturers, utilities, balance-of-system manufacturers, and standards representatives have come together to address safety and code-related issues for future PV installations. This paper addresses why compliance with the accepted codes and standards is needed and how it is being achieved.

  18. A comprehensive catalogue of the coding and non-coding transcripts of the human inner ear.

    PubMed

    Schrauwen, Isabelle; Hasin-Brumshtein, Yehudit; Corneveaux, Jason J; Ohmen, Jeffrey; White, Cory; Allen, April N; Lusis, Aldons J; Van Camp, Guy; Huentelman, Matthew J; Friedman, Rick A

    2016-03-01

    The mammalian inner ear consists of the cochlea and the vestibular labyrinth (utricle, saccule, and semicircular canals), which participate in both hearing and balance. Proper development and life-long function of these structures involves a highly complex coordinated system of spatial and temporal gene expression. The characterization of the inner ear transcriptome is likely important for the functional study of auditory and vestibular components, yet, primarily due to tissue unavailability, detailed expression catalogues of the human inner ear remain largely incomplete. We report here, for the first time, comprehensive transcriptome characterization of the adult human cochlea, ampulla, saccule and utricle of the vestibule obtained from patients without hearing abnormalities. Using RNA-Seq, we measured the expression of >50,000 predicted genes corresponding to approximately 200,000 transcripts, in the adult inner ear and compared it to 32 other human tissues. First, we identified genes preferentially expressed in the inner ear, and unique either to the vestibule or cochlea. Next, we examined expression levels of specific groups of potentially interesting RNAs, such as genes implicated in hearing loss, long non-coding RNAs, pseudogenes and transcripts subject to nonsense mediated decay (NMD). We uncover the spatial specificity of expression of these RNAs in the hearing/balance system, and reveal evidence of tissue specific NMD. Lastly, we investigated the non-syndromic deafness loci to which no gene has been mapped, and narrow the list of potential candidates for each locus. These data represent the first high-resolution transcriptome catalogue of the adult human inner ear. A comprehensive identification of coding and non-coding RNAs in the inner ear will enable pathways of auditory and vestibular function to be further defined in the study of hearing and balance. Expression data are freely accessible at https://www.tgen.org/home/research

  19. Estimation of ultrasonic attenuation in a bone using coded excitation.

    PubMed

    Nowicki, A; Litniewski, J; Secomski, W; Lewin, P A; Trots, I

    2003-11-01

    This paper describes a novel approach to estimating broadband ultrasound attenuation (BUA) in bone structure in humans in vivo using coded excitation. BUA is an accepted indicator for the assessment of osteoporosis. In the tested approach a coded acoustic signal is emitted and the received echoes are compressed into brief, high-amplitude pulses by means of matched filters and correlation receivers. In this way the acoustic peak pressure amplitude probing the tissue can be markedly decreased, whereas the average transmitted intensity increases proportionally to the length of the code. This paper examines the properties of three different transmission schemes, based on the Barker code, the chirp and the Golay code. The system designed is capable of generating a 16-bit complementary Golay code (CGC), a linear frequency modulated (LFM) chirp and a 13-bit Barker code (BC) at 0.5 and 1 MHz center frequencies. Both in vivo data acquired from healthy heel bones and in vitro data obtained from human calcaneus were examined, and a comparison between the results using coded excitation and a two-cycle sine burst is presented. It is shown that the CGC system allows the effective range of frequencies employed in the measurement of broadband acoustic energy attenuation in trabecular bone to be doubled in comparison to the standard 0.5 MHz pulse transmission. The algorithm used to calculate pairs of Golay sequences of different lengths, which provide the temporal side-lobe cancellation, is also presented. Current efforts are focused on adapting the system developed for operation in pulse-echo mode; this would allow examination and diagnosis of bones with limited access such as the hip bone. PMID:14585473
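The temporal side-lobe cancellation mentioned in the abstract rests on the defining property of complementary Golay pairs: the autocorrelations of the two sequences sum to zero at every non-zero lag. A minimal sketch (my illustration, not the paper's algorithm) that builds a pair recursively and checks the property:

```python
import numpy as np

def golay_pair(n):
    """Recursively build a complementary Golay pair of length 2**n:
    (a, b) -> (a|b, a|-b), where | denotes concatenation."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)  # 16-bit codes, the length used in the described system
acf_sum = np.correlate(a, a, "full") + np.correlate(b, b, "full")
# All side lobes cancel; only the zero-lag peak of height 2N = 32 survives.
print(acf_sum.astype(int))
```

Transmitting the two codes on separate firings and summing the matched-filter outputs therefore yields a clean impulse-like response, which is what allows the peak pressure to be lowered without losing signal-to-noise ratio.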

  20. HADES, A Code for Simulating a Variety of Radiographic Techniques

    SciTech Connect

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.

  1. Programming a real code in a functional language (part 1)

    SciTech Connect

    Hendrickson, C.P.

    1991-09-10

    For some, functional languages hold the promise of allowing ease of programming massively parallel computers that imperative languages such as Fortran and C do not offer. At LLNL, we have initiated a project to write the physics of a major production code in Sisal, a functional language developed at LLNL in collaboration with researchers throughout the world. We are investigating the expressibility of Sisal, as well as its performance on a shared-memory multiprocessor, the Y-MP. An interesting aspect of the project is that Sisal modules can call Fortran modules, and are callable by them. This eliminates the rewriting of 80% of the production code that would not benefit from parallel execution. Preliminary results indicate that the restrictive nature of the language does not cause problems in expressing the algorithms we have chosen. Some interesting aspects of programming in a mixed functional-imperative environment have surfaced, but can be managed. 8 refs.

  2. Radiation transport phenomena and modeling - part A: Codes

    SciTech Connect

    Lorence, L.J.

    1997-06-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.

  3. A need for a code of ethics in science communication?

    NASA Astrophysics Data System (ADS)

    Benestad, R. E.

    2009-09-01

    The modern western civilization and high standard of living are to a large extent the 'fruits' of scientific endeavor over generations. Some examples include the longer life expectancy due to progress in the medical sciences, and changes in infrastructure associated with the utilization of electromagnetism. Modern meteorology is not possible without state-of-the-art digital computers, satellites, remote sensing, and communications. Science is also of relevance for policy making, e.g. the present hot topic of climate change. Climate scientists have recently become highly exposed to media focus and mass communication, a task for which many are not trained. Furthermore, science, communication, and politics have different objectives, and do not necessarily mix. Scientists have an obligation to provide unbiased information, and a code of ethics is needed to give guidance on acceptable and unacceptable conduct. Some examples of questionable conduct in Norway include using the title 'Ph.D.' to imply scientific authority when the person never obtained such an academic degree, or writing biased and one-sided articles in a Norwegian encyclopedia that do not reflect the scientific consensus. It is proposed here that a set of guidelines (for scientists and journalists) and a code of conduct could provide recommendations on how to act in the media - similar to a code of conduct for carrying out research - to which everyone could agree, even when disagreeing on specific scientific questions.

  4. A systems neurophysiology approach to voluntary event coding.

    PubMed

    Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian

    2016-07-15

    Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward, with the "Theory of Event Coding" (TEC) being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine which sub-processes, dissociable by means of cognitive-neurophysiological methods, are involved in voluntary event coding. This was combined with EEG source localization. We also introduced reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that the processes involved in voluntary event coding predominantly included stimulus categorization, feature unbinding and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). At the systems-neurophysiology level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions, likely reflecting aspects of perceptual decision making, but did not modulate the other processes. The perceptual categorization stage appears central for voluntary event-file coding. PMID:27153981

  5. A novel 2D wavelength-time chaos code in optical CDMA system

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Xin, Xiangjun; Wang, Yongjun; Zhang, Lijia; Yu, Chongxiu; Meng, Nan; Wang, Houtian

    2012-11-01

    A two-dimensional wavelength-time chaos code is proposed and constructed for a synchronous optical code division multiple access system. The access performance is compared among the one-dimensional chaos code, the WDM/chaos code and the proposed code. The comparison shows that the two-dimensional wavelength-time chaos code possesses larger capacity, better spectral efficiency and a lower bit-error ratio than the WDM/chaos combination and the one-dimensional chaos code.

  6. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  7. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  8. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  9. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  10. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  11. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  12. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  13. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  14. A parallel TreeSPH code for galaxy formation

    NASA Astrophysics Data System (ADS)

    Lia, Cesario; Carraro, Giovanni

    2000-05-01

    We describe a new implementation of a parallel TreeSPH code with the aim of simulating galaxy formation and evolution. The code has been parallelized using shmem, a Cray proprietary library to handle communications between the 256 processors of the Silicon Graphics T3E massively parallel supercomputer hosted by the Cineca Supercomputing Center (Bologna, Italy). The code combines the smoothed particle hydrodynamics (SPH) method for solving hydrodynamical equations with the popular Barnes & Hut tree code to perform the gravity calculation with N log N scaling, and it is based on the scalar TreeSPH code developed by Carraro et al. Parallelization is achieved by distributing particles among processors according to a workload criterion. Benchmarks of the code, in terms of load balance and scalability, are analysed and critically discussed against the adiabatic collapse of an isothermal gas sphere test using 2×10⁴ particles on 8 processors. The code load-balances at more than the 95 per cent level. Increasing the number of processors, the load balance slightly worsens. The deviation from perfect scalability for increasing numbers of processors is almost negligible up to 32 processors. Finally, we present a simulation of the formation of an X-ray galaxy cluster in a flat cold dark matter cosmology, using 2×10⁵ particles and 32 processors, and compare our results with Evrard's P3M-SPH simulations. Additionally, we have incorporated radiative cooling, star formation, feedback from SNe of types II and Ia, stellar winds and UV flux from massive stars, and an algorithm to follow the chemical enrichment of the interstellar medium. Simulations with some of these ingredients are also presented.

  15. LUDWIG: A parallel Lattice-Boltzmann code for complex fluids

    NASA Astrophysics Data System (ADS)

    Desplat, Jean-Christophe; Pagonabarraga, Ignacio; Bladon, Peter

    2001-03-01

    This paper describes Ludwig, a versatile code for the simulation of Lattice-Boltzmann (LB) models in 3D on cubic lattices. In fact, Ludwig is not a single code, but a set of codes that share certain common routines, such as I/O and communications. If Ludwig is used as intended, a variety of complex fluid models with different equilibrium free energies are simple to code, so that the user may concentrate on the physics of the problem, rather than on parallel computing issues. Thus far, Ludwig's main application has been to symmetric binary fluid mixtures. We first explain the philosophy and structure of Ludwig which is argued to be a very effective way of developing large codes for academic consortia. Next we elaborate on some parallel implementation issues such as parallel I/O, and the use of MPI to achieve full portability and good efficiency on both MPP and SMP systems. Finally, we describe how to implement generic solid boundaries, and look in detail at the particular case of a symmetric binary fluid mixture near a solid wall. We present a novel scheme for the thermodynamically consistent simulation of wetting phenomena, in the presence of static and moving solid boundaries, and check its performance.

  16. Bounds of the bit error probability of a linear cyclic code over GF(2^l) and its extended code

    NASA Technical Reports Server (NTRS)

    Cheng, Unjeng; Huth, Gaylord K.

    1988-01-01

    An upper bound on the bit-error probability (BEP) of a linear cyclic code over GF(2^l) with hard-decision (HD) maximum-likelihood (ML) decoding on memoryless symmetric channels is derived. Performance results are presented for Reed-Solomon codes over GF(32), GF(64), and GF(128). Also, a union upper bound on the BEP of a linear cyclic code with either HD or soft-decision (SD) ML decoding is developed, as well as the corresponding bounds for the extended code of a linear cyclic code. Using these bounds, which are tight at low bit-error rates, the performance advantage of SD and HD ML decoding over bounded-distance decoding is established.
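A union bound of this kind sums, over all nonzero codeword weights, the codeword multiplicity times the probability of confusing the transmitted codeword with one at that distance. As a hedged illustration of the technique (using the binary (7,4) Hamming code rather than the paper's Reed-Solomon codes over GF(2^l)), one can evaluate the bound on a binary symmetric channel:

```python
from math import comb

def pairwise_error(d, p):
    """Probability that hard-decision ML decoding on a BSC with crossover p
    prefers a codeword at Hamming distance d over the transmitted one."""
    pe = sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(d // 2 + 1, d + 1))
    if d % 2 == 0:  # ties at exactly d/2 errors are broken at random
        pe += 0.5 * comb(d, d // 2) * (p * (1 - p)) ** (d // 2)
    return pe

# Weight enumerator of the (7,4) Hamming code: A_3 = A_4 = 7, A_7 = 1
weight_enum = {3: 7, 4: 7, 7: 1}
p = 0.01
union_bound = sum(a * pairwise_error(w, p) for w, a in weight_enum.items())
print(f"word-error union bound at p={p}: {union_bound:.3e}")
```

As the abstract notes for its own bounds, this sum is tight when the error rate is low, because double-counting of overlapping error events then becomes negligible.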

  17. Experimental qualification of a code for optimizing gamma irradiation facilities

    NASA Astrophysics Data System (ADS)

    Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.

    Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are frequently found difficult to meet.

  18. A new balanced modulation code for a phase-image-based holographic data storage system

    NASA Astrophysics Data System (ADS)

    John, Renu; Joseph, Joby; Singh, Kehar

    2005-08-01

    We propose a new balanced modulation code for coding data pages in phase-image-based holographic data storage systems. The code addresses the coding subtleties that arise in phase-based systems when performing a content-based search of a holographic database. A modification of the existing 8:12 modulation code, it removes the false hits that occur in phase-based content-addressable systems due to phase-pixel subtractions. Using simulations and experiments, we demonstrate the improved discrimination ratio of the new code during content addressing through a holographic memory, and compare it with the conventional coding scheme to analyse the false hits caused by subtraction of phase pixels.
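
    A balanced 8:12 code is possible because there are C(12, 6) = 924 twelve-pixel patterns with exactly six "on" pixels, comfortably more than the 256 needed for one codeword per 8-bit data symbol. A minimal sketch of such a codebook (the enumeration order here is arbitrary, not the paper's 8:12 mapping):

```python
from itertools import combinations

def balanced_codebook(n=12, weight=6, size=256):
    """Enumerate n-pixel words with a fixed number of 'on' pixels and keep
    the first `size` of them -- one codeword per 8-bit data symbol."""
    words = []
    for ones in combinations(range(n), weight):
        word = [0] * n
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
        if len(words) == size:
            break
    return words

book = balanced_codebook()
print(len(book))
```

    Because every codeword carries the same number of "on" pixels, correlation scores during a content search cannot be inflated or deflated by codeword weight, which is what suppresses the false hits.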

  19. A colorful origin for the genetic code: information theory, statistical mechanics and the emergence of molecular codes.

    PubMed

    Tlusty, Tsvi

    2010-09-01

    The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. While the biochemical details of this code were unraveled long ago, its origin is still obscure. We review information-theoretic approaches to the problem of the code's origin and discuss the results of a recent work that treats the code in terms of an evolving, error-prone information channel. Our model - which utilizes the rate-distortion theory of noisy communication channels - suggests that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino-acids, for error-tolerance and for minimal cost of resources. The description of the code as an information channel allows us to mathematically identify the fitness of the code and locate its emergence at a second-order phase transition when the mapping of codons to amino-acids becomes nonrandom. The noise in the channel brings about an error-graph, in which edges connect codons that are likely to be confused. The emergence of the code is governed by the topology of the error-graph, which determines the lowest modes of the graph-Laplacian and is related to the map coloring problem. PMID:20558115
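
    The error-graph and its Laplacian are concrete objects one can compute. A minimal sketch under the simplest error model, in which two codons are confusable when they differ by a single-nucleotide substitution (the channel in the paper may weight errors differently):

```python
import numpy as np
from itertools import product

bases = "ACGU"
codons = ["".join(c) for c in product(bases, repeat=3)]   # the 64 codons
index = {c: i for i, c in enumerate(codons)}

# Error-graph: an edge joins codons differing by one nucleotide substitution.
A = np.zeros((64, 64))
for c in codons:
    for pos in range(3):
        for b in bases:
            if b != c[pos]:
                d = c[:pos] + b + c[pos + 1:]
                A[index[c], index[d]] = 1

L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[:4])                        # the lowest modes
```

    Under this model every codon has 9 neighbors, and the spectrum starts with the zero mode followed by a ninefold eigenvalue 4; it is these low modes of the graph-Laplacian that govern the emergent code in the description above.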

  20. Performance analysis of a multilevel coded modulation system

    NASA Astrophysics Data System (ADS)

    Kofman, Yosef; Zehavi, Ephraim; Shamai, Shlomo

    1994-02-01

    A modified version of the multilevel coded modulation scheme of Imai & Hirakawa is presented and analyzed. In the transmitter, the outputs of the component codes are bit interleaved prior to mapping into 8-PSK channel signals. A multistage receiver is considered, in which the output amplitudes of the Gaussian channel are soft limited before entering the second and third stage decoders. Upper bounds and Gaussian approximations for the bit error probability of every component code, which take into account errors in previously decoded stages, are presented. Aided by a comprehensive computer simulation, it is demonstrated in a specific example that the addition of the interleaver and soft limiter in the third stage improves its performance by 1.1 dB at a bit error probability of 10^-5, and that the multilevel scheme improves on an Ungerboeck code with the same decoding complexity. The rate selection of the component codes is also considered and a simple selection rule, based on information theoretic arguments, is provided.
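
    The level structure can be seen from the 8-PSK set partition: one bit from each of the three component codes selects the signal point, and the minimum distance within a subset grows from level to level, so the component codes can be progressively weaker. A minimal sketch using natural set-partition labeling (the paper's exact bit-to-symbol mapping, with interleaving, differs):

```python
import cmath
import math

def psk8(b2, b1, b0):
    """One bit from each of three component codes selects an 8-PSK point
    (natural set-partition labeling -- an illustrative assumption)."""
    k = (b2 << 2) | (b1 << 1) | b0
    return cmath.exp(2j * math.pi * k / 8)

# Distance between points differing only in the bit of a given level:
d0 = abs(psk8(0, 0, 0) - psk8(0, 0, 1))   # level 0: 2 sin(pi/8), smallest
d1 = abs(psk8(0, 0, 0) - psk8(0, 1, 0))   # level 1: sqrt(2)
d2 = abs(psk8(0, 0, 0) - psk8(1, 0, 0))   # level 2: 2, largest
print(d0, d1, d2)
```

    Level 0, with the smallest intra-subset distance, is the one that needs the strongest component code; this is the trade-off the rate-selection rule in the paper formalizes.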

  1. Organizing conceptual knowledge in humans with a gridlike code.

    PubMed

    Constantinescu, Alexandra O; O'Reilly, Jill X; Behrens, Timothy E J

    2016-06-17

    It has been hypothesized that the brain organizes concepts into a mental map, allowing conceptual relationships to be navigated in a manner similar to that of space. Grid cells use a hexagonally symmetric code to organize spatial representations and are the likely source of a precise hexagonal symmetry in the functional magnetic resonance imaging signal. Humans navigating conceptual two-dimensional knowledge showed the same hexagonal signal in a set of brain regions markedly similar to those activated during spatial navigation. This gridlike signal is consistent across sessions acquired within an hour and more than a week apart. Our findings suggest that global relational codes may be used to organize nonspatial conceptual representations and that these codes may have a hexagonal gridlike pattern when conceptual knowledge is laid out in two continuous dimensions. PMID:27313047

  2. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  3. The performance of a sequential acquisition system for PN codes

    NASA Astrophysics Data System (ADS)

    Kerr, R. W.; Arakaki, E. M.; Huang, M. Y.

    Direct sequence spread spectrum techniques are being applied in an increasing number of advanced communication systems where anti-jam (AJ), low probability of intercept (LPI), or code division multiple access (CDMA) capabilities are required. In all these systems, rapid acquisition of long PN codes is a system necessity. Generally, acquisition of long PN codes is accomplished by correlation measurements of the incoming sequence with a locally generated code sequence. However, instead of utilizing fixed integration times, a sequential acquisition technique could also be used for active correlation, which results in greatly reduced acquisition times. TRW has designed and completed a limited production of 33 spread spectrum receivers for use with the NASA Tracking Data Relay Satellite System (TDRSS). The receivers provide multiple access and ranging capability while simultaneously decreasing the transmitted power flux density to meet CCIR restrictions. This paper presents the analysis, hardware description, and performance of the sequential code acquisition system implemented on these receivers. A unique noise calibration process, which holds the key to successful operation of these receivers, is described in detail.
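
    The correlation measurement underlying acquisition is easy to illustrate with a short m-sequence: the in-phase correlation peaks at the sequence length while every misaligned shift yields -1, and a sequential test exploits this separation to dismiss wrong code phases quickly instead of integrating for a fixed time. A minimal sketch (the 5-stage register is illustrative; operational TDRSS codes are far longer):

```python
def lfsr_msequence(taps, nbits):
    """Generate a maximal-length PN sequence from a Fibonacci LFSR."""
    state = [1] * nbits
    seq = []
    for _ in range(2**nbits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

# Taps [5, 3] realize a primitive degree-5 polynomial: a 31-chip m-sequence.
pn = lfsr_msequence([5, 3], 5)
chips = [1 - 2 * b for b in pn]          # map {0, 1} -> {+1, -1}

def correlate(a, b, shift):
    """Full-period correlation of a against a cyclic shift of b."""
    return sum(x * b[(i + shift) % len(b)] for i, x in enumerate(a))

print(correlate(chips, chips, 0), correlate(chips, chips, 7))
```

    The aligned correlation equals 31 and every other shift gives exactly -1, so even a crude threshold separates the two hypotheses; the noise calibration in the receivers exists to place that threshold reliably.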

  4. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  5. European coding system for tissues and cells: a challenge unmet?

    PubMed

    Reynolds, Melvin; Warwick, Ruth M; Poniatowski, Stefan; Trias, Esteve

    2010-11-01

    The Comité Européen de Normalisation (European Committee for Standardization, CEN) Workshop on Coding of Information and Traceability of Human Tissues and Cells was established by the Expert Working Group of the Directorate General for Health and Consumer Affairs of the European Commission (DG SANCO) to identify requirements concerning the coding of information and the traceability of human tissues and cells, and propose guidelines and recommendations to permit the implementation of the European Coding system required by the European Tissues and Cells Directive 2004/23/EC (ED). The Workshop included over 70 voluntary participants from tissue, blood and eye banks, national ministries for healthcare, transplant organisations, universities and coding organisations; mainly from Europe with a small number of representatives from professionals in Canada, Australia, USA and Japan. The Workshop commenced in April 2007 and held its final meeting in February 2008. The draft Workshop Agreement went through a public comment phase from 15 December 2007 until 15 January 2008 and the endorsement period ran from 9 April 2008 until 2 May 2008. The endorsed CEN Workshop Agreement (CWA) set out the issues regarding a common coding system, qualitatively assessed what the industry felt was required of a coding system, reviewed coding systems that were put forward as potential European coding systems and established a basic specification for a proposed European coding system for human tissues and cells, based on ISBT 128, and which is compatible with existing systems of donation identification, traceability and nomenclatures, indicating how implementation of that system could be approached. The CWA, and the associated Workshop proposals with recommendations, were finally submitted to the European Commission and to the Committee of Member States that assists its management process under article 29 of the Directive 2004/23/EC on May 25 2008. In 2009 the European Commission initiated an

  6. Overview of WARP, a particle code for Heavy Ion Fusion

    SciTech Connect

    Friedman, A.; Grote, D.P.; Callahan, D.A.; Langdon, A.B.; Haber, I.

    1993-02-22

    The beams in a Heavy Ion beam driven inertial Fusion (HIF) accelerator must be focused onto small spots at the fusion target, and so preservation of beam quality is crucial. The nonlinear self-fields of these space-charge-dominated beams can lead to emittance growth; thus a self-consistent field description is necessary. We have developed a multi-dimensional discrete-particle simulation code, WARP, and are using it to study the behavior of HIF beams. The code's 3d package combines features of an accelerator code and a particle-in-cell plasma simulation, and can efficiently track beams through many lattice elements and around bends. We have used the code to understand the physics of aggressive drift-compression in the MBE-4 experiment at Lawrence Berkeley Laboratory (LBL). We have applied it to LBL's planned ILSE experiments, to various "recirculator" configurations, and to the study of equilibria and equilibration processes. Applications of the 3d package to ESQ injectors, and of the r, z package to longitudinal stability in driver beams, are discussed in related papers.

  7. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  8. Synergy from Silence in a Combinatorial Neural Code

    PubMed Central

    Schneidman, Elad; Puchalla, Jason L.; Segev, Ronen; Harris, Robert A.; Bialek, William; Berry, Michael J.

    2011-01-01

    The manner in which groups of neurons represent events in the external world is a central question in neuroscience. Estimation of the information encoded by small groups of neurons has shown that in many neural systems, cells carry mildly redundant information. These measures average over all the activity patterns of a neural population. Here, we analyze the population code of the salamander and guinea pig retinas by quantifying the information conveyed by specific multi-cell activity patterns. Synchronous spikes, even though they are relatively rare and highly informative, convey less information than the sum of either spike alone, making them redundant coding symbols. Instead, patterns of spiking in one cell and silence in others, which are relatively common and often overlooked as special coding symbols, were found to be mostly synergistic. Our results reflect that the mild average redundancy between ganglion cells that was previously reported is actually the result of redundant and synergistic multi-cell patterns, whose contributions partially cancel each other when taking the average over all patterns. We further show that similar coding properties emerge in a generic model of neural responses, suggesting that this form of combinatorial coding, in which specific compound patterns carry synergistic or redundant information, may exist in other neural circuits. PMID:22049416

  9. Parallel Processing of a Groundwater Contaminant Code

    SciTech Connect

    Arnett, Ronald Chester; Greenwade, Lance Eric

    2000-05-01

    The U. S. Department of Energy’s Idaho National Engineering and Environmental Laboratory (INEEL) is conducting a field test of experimental enhanced bioremediation of trichloroethylene (TCE) contaminated groundwater. TCE is a chlorinated organic substance that was used as a solvent in the early years of the INEEL and in some cases disposed of to the aquifer. There is an effort underway to enhance the natural bioremediation of TCE by adding a non-toxic substance that serves as a feed material for the bacteria that can biologically degrade the TCE.

  10. Incorporation of Condensation Heat Transfer in a Flow Network Code

    NASA Technical Reports Server (NTRS)

    Anthony, Miranda; Majumdar, Alok; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    In this paper we have investigated the condensation of water vapor in a short tube. A numerical model of condensation heat transfer was incorporated in a flow network code, the Generalized Fluid System Simulation Program (GFSSP), a finite volume based flow network code. Four different condensation models are presented in the paper. Soliman's correlation was found to be the most stable at the low flow rates of particular interest in this application. Another highlight of this investigation is conjugate (coupled) heat transfer between solid and fluid. This work was done in support of NASA's International Space Station program.

  11. Colour coding scrubs as a means of improving perioperative communication.

    PubMed

    Litak, Dominika

    2011-05-01

    Effective communication within the operating department is essential for achieving patient safety. A large part of perioperative communication is non-verbal. One type of non-verbal communication is 'object communication', the most common form of which is clothing. The colour coding of clothing such as scrubs has the potential to optimise perioperative communication with patients and between staff. A colour contains a coded message, and is a visual cue for the immediate identification of personnel. This is of key importance in the perioperative environment. The idea of colour coded scrubs in the perioperative setting has not been much explored to date and, given its potential contribution towards the improvement of patient outcomes, deserves consideration. PMID:21834289

  12. A Coach's Code of Conduct. Position Statement

    ERIC Educational Resources Information Center

    Lyman, Linda; Ewing, Marty; Martino, Nan

    2009-01-01

    Coaches exert a profound impact on our youths; therefore, society sets high expectations for them. As such, whether coaches are compensated or work solely as volunteers, they are responsible for executing coaching as a professional. If we are to continue to enhance the cultural perceptions of coaching, we must strive to develop and master the…

  13. A neural coding scheme reproducing foraging trajectories.

    PubMed

    Gutiérrez, Esther D; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series. PMID:26648311

  14. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in the literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and mica substrate with an appropriate adsorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. This work was supported by the John A. Hauck Foundation and Xavier University.

  15. A neural coding scheme reproducing foraging trajectories

    PubMed Central

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series. PMID:26648311

  16. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series.

  17. SCAMPI: A code package for cross-section processing

    SciTech Connect

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  18. Image Coding By Vector Quantization In A Transformed Domain

    NASA Astrophysics Data System (ADS)

    Labit, C.; Marescq, J. P.

    1986-05-01

    Using vector quantization in a transformed domain, TV images are coded. The method exploits the spatial redundancy of small 4x4 blocks of pixels: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm then ranks them into classes based on visual and transform properties. For each class, the high-energy coefficients are retained and, using vector quantization, a codebook is built for the remaining AC part of the transformed blocks. Each codeword is referenced by an index, and each block is then coded by specifying its DC coefficient and the associated index.
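
    A minimal sketch of the scheme's two stages, transform followed by codebook lookup, under toy assumptions (a random AC codebook stands in for a trained one, and the classification stage is omitted):

```python
import numpy as np

def dct_matrix(n=4):
    """Orthonormal DCT-II basis for n-point transforms."""
    C = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            C[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

C = dct_matrix(4)

def encode_block(block, codebook):
    """Keep the DC coefficient exactly; vector-quantize the 15 AC terms."""
    coeffs = C @ block @ C.T
    dc = coeffs[0, 0]
    ac = coeffs.flatten()[1:]
    idx = int(np.argmin(((codebook - ac) ** 2).sum(axis=1)))
    return dc, idx

def decode_block(dc, idx, codebook):
    """Rebuild the block from the DC value and the codebook entry."""
    coeffs = np.zeros(16)
    coeffs[0] = dc
    coeffs[1:] = codebook[idx]
    return C.T @ coeffs.reshape(4, 4) @ C

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 15))      # toy 64-entry AC codebook
block = rng.normal(size=(4, 4))
dc, idx = encode_block(block, codebook)
rec = decode_block(dc, idx, codebook)
```

    Each block costs one DC value plus a 6-bit index here, which is the cost structure the abstract describes; a real codebook would be trained per class, e.g. with the LBG algorithm.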

  19. Turbulence requirements of a commercial CFD code

    NASA Technical Reports Server (NTRS)

    Vandoormaal, J. P.; Mueller, C. M.; Raw, M. J.

    1995-01-01

    This viewgraph presentation gives a profile of Advanced Scientific Computing (ASC) Ltd., applications, clients and clients' needs, ASC's directions, and how the Center for Modeling of Turbulence and Transition (CMOTT) can help.

  20. DART: A simulation code for charged particle beams: Revision 1

    SciTech Connect

    White, R.C.; Barr, W.L.; Moir, R.W.

    1989-07-31

    This paper presents a recently modified version of the 2-D code, DART, which can simulate the behavior of a beam of charged particles whose trajectories are determined by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation includes space charge, secondary electrons, and the ionization of neutral gas. A beam can contain up to nine superimposed beamlets of different energy and species. The calculation of energy conversion efficiency and the method of specifying the electrode geometry are described. Basic procedures for using the code are given, and sample input and output fields are shown. 7 refs., 18 figs.

  1. Parallelization of a three-dimensional compressible transition code

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Bokhari, Shahid H.

    1990-01-01

    The compressible, three-dimensional, time-dependent Navier-Stokes equations are solved on a 20 processor Flex/32 computer. The code is a parallel implementation of an existing code operational on the Cray-2 at NASA Ames, which performs direct simulations of the initial stages of the transition process of wall-bounded flow at supersonic Mach numbers. Spectral collocation in all three spatial directions (Fourier along the plate and Chebyshev normal to it) ensures high accuracy of the flow variables. By hiding most of the parallelism in low-level routines, the casual user is shielded from most of the nonstandard coding constructs. Speedups of 13 out of a maximum of 16 are achieved on the largest computational grids.

  2. Error correction coding for a meteor burst channel

    NASA Astrophysics Data System (ADS)

    Miller, Scott L.; Milstein, Laurence B.

    1990-09-01

    The time-varying-SNR model for the meteor burst (MB) channel is reviewed. Bounds on the capacity of the channel are derived for both a constant SNR model and a time-varying SNR model. These bounds show that there is a significant throughput improvement to be gained by using forward error correction. Two methods are given for determining the performance of an MB system when packets of information are encoded with an (n,k) linear block code. Numerical results are generated using high-rate BCH codes, and it is found that about 25 percent improvement over uncoded systems can be obtained by choosing the code rate properly. In addition, some suggestions for techniques that provide further improvement are given.
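
    The throughput comparison reduces to simple binomial arithmetic: an (n, k) block code trades a rate factor k/n for tolerance of up to t bit errors per packet. A minimal sketch with an illustrative high-rate BCH code and channel error rate (not the paper's exact operating point):

```python
from math import comb

def packet_success(n, t, p):
    """Probability that at most t of a packet's n bits are flipped."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

# Compare an uncoded 127-bit packet with a (127, 92) BCH code (t = 5).
p = 0.005                                  # assumed channel bit-error rate
uncoded = 127 * packet_success(127, 0, p)  # expected info bits delivered
coded = 92 * packet_success(127, 5, p)     # rate loss vs. error tolerance
print(uncoded, coded)
```

    At this error rate the coded packet delivers more information on average despite carrying only 92 of 127 information bits; tuning the code rate against the channel's SNR profile is exactly the optimization the paper carries out.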

  3. Effects of bar coding on a pharmacy stock replenishment system.

    PubMed

    Chester, M I; Zilz, D A

    1989-07-01

    A bar-code stock ordering system installed in the ambulatory-care pharmacy and sterile products area of a hospital pharmacy was compared with a manual paper system to quantify overall time demands and determine the error rate associated with each system. The bar-code system was implemented in the ambulatory-care pharmacy in November 1987 and in the sterile products area in January 1988. It consists of a Trakker 9440 transaction manager with a digital scanner; labels are printed with a dot matrix printer. Electronic scanning of bar-code labels and entry of the amount required using the key-pad on the transaction manager replaced use of a preprinted form for ordering items. With the bar-code system, ordering information is transferred electronically via cable to the pharmacy inventory computer; with the manual system, this information was input by a stockroom technician. To compare the systems, the work of technicians in the ambulatory-care pharmacy and sterile products area was evaluated before and after implementation of the bar-code system. The time requirements for information gathering and data transfer were recorded by direct observation; the prevalence of errors under each system was determined by comparing unprocessed ordering information with the corresponding computer-generated "pick lists" (itemized lists including the amount of each product ordered). Time consumed in extra trips to the stockroom to replace out-of-stock items was self-reported. Significantly less time was required to order stock and transfer data to the pharmacy inventory computer with the bar-code system than with the manual system.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2757044

  4. NASTRAN as a resource in code development

    NASA Technical Reports Server (NTRS)

    Stanton, E. L.; Crain, L. M.; Neu, T. F.

    1975-01-01

    A case history is presented in which the NASTRAN system provided both guidelines and working software for use in the development of a discrete element program, PATCHES-111. To avoid duplication and to take advantage of the wide spread user familiarity with NASTRAN, the PATCHES-111 system uses NASTRAN bulk data syntax, NASTRAN matrix utilities, and the NASTRAN linkage editor. Problems in developing the program are discussed along with details on the architecture of the PATCHES-111 parametric cubic modeling system. The system includes model construction procedures, checkpoint/restart strategies, and other features.

  5. Requirements for a multifunctional code architecture

    SciTech Connect

    Tiihonen, O.; Juslin, K.

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experience gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's desk now far exceeds what was available ten years ago. At the same time, the features of everyday office software set standards for the way input data and calculational results are managed.

  6. A Method for Automated Program Code Testing

    ERIC Educational Resources Information Center

    Drasutis, Sigitas; Motekaityte, Vida; Noreika, Algirdas

    2010-01-01

    The Internet has recently encouraged the society to convert almost all its needs to electronic resources such as e-libraries, e-cultures, e-entertainment as well as e-learning, which has become a radical idea to increase the effectiveness of learning services in most schools, colleges and universities. E-learning can not be completely featured and…

  7. A versatile integrated block codes encoder-decoder

    NASA Astrophysics Data System (ADS)

    Laurent, P. A.

    1989-12-01

    A new Very Large Scale Integration (VLSI) circuit is described that performs encoding and decoding of almost all Reed-Solomon and BCH codes (including generalized BCH), using symbol sizes from 1 to 8 bits. It is fully programmable by many standard microprocessors, which treat it like any other common co-processor. Its architecture allows a high bit rate and great flexibility. The interfacing protocol is optimized to minimize timing constraints (mail boxes) and limit the programming effort: no advanced knowledge of codes is required to use it.
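
    Although the chip targets RS and BCH codes over GF(2^l), the systematic cyclic encoding it performs has the same shape in the binary special case: divide x^(n-k) d(x) by the generator polynomial g(x) and append the remainder. A minimal sketch with the (7, 4) code, g(x) = x^3 + x + 1 (coefficients listed highest degree first):

```python
def cyclic_encode(data_bits, gen):
    """Systematic encoding of a binary cyclic code: append the remainder
    of x^(n-k) * d(x) divided by the generator polynomial g(x)."""
    r = len(gen) - 1                        # degree of g(x) = parity bits
    reg = list(data_bits) + [0] * r         # coefficients of x^(n-k) * d(x)
    for i in range(len(data_bits)):         # GF(2) polynomial long division
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return list(data_bits) + reg[-r:]       # data followed by the remainder

# (7, 4) binary cyclic (Hamming) code, g(x) = x^3 + x + 1 -> [1, 0, 1, 1]
cw = cyclic_encode([1, 0, 0, 0], [1, 0, 1, 1])
print(cw)
```

    The hardware realizes the same division with a shift register clocked at the symbol rate; over GF(2^l) the XORs become field multiplications and additions, which is what the programmable symbol size supports.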

  8. A Hydrochemical Hybrid Code for Astrophysical Problems. I. Code Verification and Benchmarks for a Photon-dominated Region (PDR)

    NASA Astrophysics Data System (ADS)

    Motoyama, Kazutaka; Morata, Oscar; Shang, Hsien; Krasnopolsky, Ruben; Hasegawa, Tatsuhiko

    2015-07-01

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.
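
    The "passively advected scalar" treatment of chemical species in the hydrodynamics module can be illustrated in one dimension with a first-order upwind finite-volume update (a deliberately minimal Godunov-type sketch, not KM2's actual axisymmetric solver):

```python
import numpy as np

def upwind_advect(q, u, dx, dt, steps):
    """First-order upwind finite-volume update for a passively advected
    scalar q with constant velocity u > 0 on a periodic 1-D grid."""
    c = u * dt / dx                        # CFL number, must satisfy c <= 1
    assert 0 < c <= 1
    for _ in range(steps):
        q = q - c * (q - np.roll(q, 1))    # flux difference across each cell
    return q

n = 100
q0 = np.zeros(n)
q0[40:60] = 1.0                            # a top-hat of species abundance
q1 = upwind_advect(q0, u=1.0, dx=1.0, dt=0.5, steps=50)
```

    The update conserves the total abundance exactly on a periodic grid and never creates new extrema, which is why the flow solver can carry the species fields while the chemistry module handles their reactions separately.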

  9. [Space coding: a Nobel prize diary].

    PubMed

    Rondi-Reig, Laure

    2015-02-01

    The Nobel Prize in Physiology or Medicine for 2014 has been awarded to three neuroscientists: John O'Keefe, May-Britt Moser and Edvard Moser for "their discoveries of cells that constitute a positioning system in the brain". This rewards innovative ideas which led to the development of intracerebral recording techniques in freely moving animals, thus providing links between behavior and physiology. The prize highlights how neural activity sustains our ability to localize ourselves and move around in the environment. This research provides key insights into how the brain drives behavior. PMID:25744268

  10. A Spectral Verification of the HELIOS-2 Lattice Physics Code

    SciTech Connect

    D. S. Crawford; B. D. Ganapol; D. W. Nigg

    2012-11-01

    Core modeling of the Advanced Test Reactor (ATR) at INL is currently undergoing a significant update through the Core Modeling Update Project [1]. The intent of the project is to bring ATR core modeling in line with today’s standard of computational efficiency and verification and validation practices. The HELIOS-2 lattice physics code [2] is the lead among several reactor physics codes dedicated to modernizing ATR core analysis. This presentation is concerned with an independent verification of the HELIOS-2 spectral representation, including the slowing-down and thermalization algorithm and its data dependency. Here, we describe and demonstrate a recently developed, simple cross-section generation algorithm based entirely on analytical multigroup parameters for both the slowing-down and thermal spectrum. The new capability features fine group detail to assess the flux and multiplication-factor dependencies on cross-section data sets, using the fundamental infinite medium as an example.