Science.gov

Sample records for a codes

  1. IMP: A performance code

    NASA Astrophysics Data System (ADS)

    Dauro, Vincent A., Sr.

    IMP (Integrated Mission Program) is a simulation language and code used to model present and future Earth, Moon, or Mars missions. The profile is user controlled through selection from a large menu of events and maneuvers. A Fehlberg 7/13 Runge-Kutta integrator with error and step size control is used to numerically integrate the differential equations of motion (DEQ) of three spacecraft, a main, a target, and an observer. Through selection, the DEQ's include guided thrust, oblate gravity, atmosphere drag, solar pressure, and Moon gravity effects. Guide parameters for thrust events and performance parameters of velocity changes (Delta-V) and propellant usage (maximum of five systems) are developed as needed. Print, plot, summary, and debug files are output.
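
    A minimal sketch of the kind of error- and step-size-controlled integration described above. It assumes "Fehlberg 7/13" refers to Fehlberg's 13-stage 7(8) embedded pair; for brevity the sketch uses a simple Euler/Heun pair, so only the accept/reject and step-adaptation logic is representative of such an integrator:

        def adaptive_step(f, t, y, h, tol):
            """Advance y' = f(t, y) by one accepted step, adapting h to the error estimate."""
            while True:
                k1 = f(t, y)
                k2 = f(t + h, y + h * k1)
                y_low = y + h * k1                    # 1st-order (Euler) estimate
                y_high = y + 0.5 * h * (k1 + k2)      # 2nd-order (Heun) estimate
                err = abs(y_high - y_low)
                if err <= tol:
                    # accept the step; suggest a modestly larger step for the next call
                    return t + h, y_high, min(2.0 * h, h * (tol / max(err, 1e-16)) ** 0.5)
                h *= 0.5                              # reject and retry with a smaller step

        # Example: integrate y' = -y from t = 0, y(0) = 1 up to t = 1.
        t, y, h = 0.0, 1.0, 0.1
        while t < 1.0:
            t, y, h = adaptive_step(lambda t, y: -y, t, y, min(h, 1.0 - t), 1e-6)
        print(y)  # approximately exp(-1)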

  2. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli. PMID:23724797

  3. A coded tracking telemetry system

    USGS Publications Warehouse

    Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.

    1989-01-01

    We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a micro-computer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.

  4. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  5. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). In addition, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. PMID:24698943

  6. SLINGSHOT - a Coilgun Design Code

    SciTech Connect

    MARDER, BARRY M.

    2001-09-01

    The Sandia coilgun [1,2,3,4,5] is an inductive electromagnetic launcher. It consists of a sequence of powered, multi-turn coils surrounding a flyway of circular cross-section through which a conducting armature passes. When the armature is properly positioned with respect to a coil, a charged capacitor is switched into the coil circuit. The rising coil currents induce a current in the armature, producing a repulsive accelerating force. The basic numerical tool for modeling the coilgun is the SLINGSHOT code, an expanded, user-friendly successor to WARP-10 [6]. SLINGSHOT computes the currents in the coils and armature, finds the forces produced by those currents, and moves the armature through the array of coils. In this approach, the cylindrically symmetric coils and armature are subdivided into concentric hoops with rectangular cross-section, in each of which the current is assumed to be uniform. The ensemble of hoops is treated as a set of coupled circuits. The specific heats and resistivities of the hoops are found as functions of temperature and used to determine the resistive heating. The code calculates the resistances and inductances for all hoops, and the mutual inductances for all hoop pairs. Using these, it computes the hoop currents from their circuit equations, finds the forces from the products of these currents and the mutual inductance gradient, and moves the armature. Treating the problem as a set of coupled circuits is a fast and accurate approach compared to solving the field equations. Its use, however, is restricted to problems in which the symmetry dictates the current paths. This paper is divided into three parts. The first presents a demonstration of the code. The second describes the input and output. The third part describes the physical models and numerical methods used in the code. It is assumed that the reader is familiar with coilguns.
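
    As a rough illustration of the force step described above (this is not SLINGSHOT itself; hoop counts, currents, and inductance gradients below are invented), the axial force on the armature follows from the hoop currents and the gradients of the coil-armature mutual inductances:

        def armature_force(coil_currents, armature_currents, dM_dz):
            """Axial force = sum over hoop pairs of I_coil * I_armature * dM/dz,
            where dM_dz[i][j] is the gradient of the mutual inductance between
            coil hoop i and armature hoop j at the present armature position."""
            force = 0.0
            for i, I_coil in enumerate(coil_currents):
                for j, I_arm in enumerate(armature_currents):
                    force += I_coil * I_arm * dM_dz[i][j]
            return force

        # Hypothetical two-hoop coil and two-hoop armature (SI units):
        print(armature_force([5.0e3, 4.0e3], [-3.0e3, -2.5e3],
                             [[-1.0e-6, -0.8e-6], [-0.6e-6, -0.5e-6]]))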

  7. The Proteomic Code: a molecular recognition code for proteins

    PubMed Central

    Biro, Jan C

    2007-01-01

    Background The Proteomic Code is a set of rules by which information in genetic material is transferred into the physico-chemical properties of amino acids. It determines how individual amino acids interact with each other during folding and in specific protein-protein interactions. The Proteomic Code is part of the redundant Genetic Code. Review The 25-year-old history of this concept is reviewed from the first independent suggestions by Biro and Mekler, through the works of Blalock, Root-Bernstein, Siemion, Miller and others, followed by the discovery of a Common Periodic Table of Codons and Nucleic Acids in 2003 and culminating in the recent conceptualization of partial complementary coding of interacting amino acids as well as the theory of the nucleic acid-assisted protein folding. Methods and conclusions A novel cloning method for the design and production of specific, high-affinity-reacting proteins (SHARP) is presented. This method is based on the concept of proteomic codes and is suitable for large-scale, industrial production of specifically interacting peptides. PMID:17999762

  8. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
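
    For context, a standard coding-theory fact behind this count (stated here from general theory, not quoted from the abstract): if the component linear codes C_1 and C_2 have minimum distances d_1 and d_2, then

        d_{\min}(C_1 \otimes C_2) = d_1 \, d_2,

    and the minimum-weight codewords of the product code arise as tensor products u \otimes v of minimum-weight codewords u \in C_1 and v \in C_2, which is what makes counting them tractable.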

  9. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
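
    A sketch of the accept/retransmit decision described above (the decoder interfaces are hypothetical; the specific inner and outer codes are not fixed here):

        def receive_frame(inner_decode, outer_check, received):
            """inner_decode(received) -> (ok, data); outer_check(data) -> True if no errors detected."""
            ok, data = inner_decode(received)
            if not ok:                 # inner decoder failed to decode
                return "RETRANSMIT"
            if not outer_check(data):  # outer code detects residual errors
                return "RETRANSMIT"
            return data                # deliver the frame to the user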

  10. Why comply with a code of ethics?

    PubMed

    Spielthenner, Georg

    2015-05-01

    A growing number of professional associations and occupational groups are creating codes of ethics with the goal of guiding their members, protecting service users, and safeguarding the reputation of the profession. There is a great deal of literature dealing with the question of to what extent ethical codes can achieve their desired objectives. The present paper does not contribute to this debate. Its aim is rather to investigate how rational it is to comply with codes of conduct. It is natural and virtually inevitable for a reflective person to ask why one should pay any attention to ethical codes, in particular if following a code is not in one's own interest. In order to achieve the aim of this paper, I shall (in the "Quasi-reasons for complying with an ethical code" section) discuss reasons that only appear to be reasons for complying with a code. In the "Code-independent reasons" section, I shall present genuine practical reasons that, however, turn out to be reasons of the wrong kind. The "Code-dependent reasons" section, finally, presents the most important reasons for complying with ethical codes. The paper argues that while ethical codes do not necessarily yield reasons for action, professionals can have genuine reasons for complying with a code, which may, however, be rather weak and easily overridden by reasons for deviating from the code. PMID:25185873

  11. The chromatin regulatory code: Beyond a histone code

    NASA Astrophysics Data System (ADS)

    Lesne, A.

    2006-03-01

    In this commentary on the contribution by Arndt Benecke in this issue, I discuss why the notion of “chromatin code” introduced and elaborated in this paper is to be preferred to that of “histone code”. Speaking of a code as regards nucleosome conformation and histone tail post-translational modifications only makes sense within the chromatin fiber, where their physico-chemical features can be translated into regulatory programs at the genome level, by means of a complex, multi-level interplay with the fiber architecture and dynamics settled in the course of Evolution. In particular, this chromatin code presumably exploits allosteric transitions of the chromatin fiber. The chromatin structure dependence of its translation suggests two alternative modes of transcription initiation regulation, also proposed in the paper by A. Benecke in this issue for interpreting strikingly bimodal micro-array data.

  12. A Better Handoff for Code Officials

    SciTech Connect

    Conover, David R.; Yerkes, Sara

    2010-09-24

    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn’t leave much time to search www.energycodes.gov, www.iccsafe.org, or the range of other helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach’s game plan, the Resource Guide is an "energy playbook" for code officials.

  13. A Mathematical Representation of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Hill, Vanessa J.; Rowlands, Peter

    Algebraic and geometric representations of the genetic code are used to show their functions in coding for amino acids. The algebra is a 64-part vector quaternion combination, and the geometry is based on the structure of the regular icosidodecahedron. An almost perfect pattern suggests that this is a biologically significant way of representing the genetic code.

  14. SPINK, A Thin Elements Spin Tracking Code

    SciTech Connect

    Luccio, Alfredo U.

    2009-08-04

    Spink is a spin tracking code for spin polarized particles. The code tracks both trajectories in 3D and spin. It works using thick element modeling from MAD and thin element modeling based on the BMT equation to track spin. The code is written in Fortran and typically runs on a Linux platform, either sequentially or MPI-parallel.

  15. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
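
    The outer-code generator factorization quoted above can be checked with a few lines of carry-less (GF(2)) polynomial arithmetic; this is a verification aid added here, not part of the original report:

        def gf2_mul(a, b):
            """Multiply two GF(2) polynomials given as integer bit masks (bit i <-> X^i)."""
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a <<= 1
                b >>= 1
            return result

        x_plus_1 = 0b11                                          # X + 1
        factor = sum(1 << i for i in [15, 14, 13, 12, 4, 3, 2, 1, 0])
        product = gf2_mul(x_plus_1, factor)
        assert product == (1 << 16) | (1 << 12) | (1 << 5) | 1   # X^16 + X^12 + X^5 + 1
        print(hex(product))                                      # 0x11021, the X.25 generator polynomial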

  16. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.

  17. A code of professional conduct for members.

    PubMed

    2006-09-01

    In light of new legislation and changing practice, together with the impending legal status of members who practise clinical photography and/or clinical videography, the Institute of Medical Illustrators (IMI) has revised and brought together A Code of Responsible Practice and its Code of Conduct. The new document, A Code of Professional Conduct for Members, details the standards required to maintain professional practice. Within the text, the Code refers to members, and where specifically appropriate, to clinical photographers. The title, 'clinical photographer', is used where the code applies to members practising clinical photography and/or videography. PMID:17162339

  18. The VISC code: A user's manual

    NASA Technical Reports Server (NTRS)

    Wilson, K.

    1973-01-01

    The VISC code is a computer automated scheme for solving the equations describing the fully coupled viscous, radiating flow at the stagnation-point of a blunt body which may or may not be ablating. The code provides a basis for obtaining predictions of the stagnation-point heating to a body entering any planetary atmosphere at hyperbolic velocities. The code is written in FORTRAN V and is operational on both the Univac 1108 (EXEC 8) system and the CDC 7600 system. The report gives an overview of the VISC code computational logic flow, a description of the input requirements and output results, and comments on the practical use of the code. As such, the report forms a user's manual for operation of the VISC code.

  19. SL4 code - A user's manual

    NASA Technical Reports Server (NTRS)

    Chou, Y. S.

    1973-01-01

    The SL-4 code is a computer automated scheme for solving the equations describing the fully-coupled viscous, radiating flow over the front face of a blunt body which may or may not be ablating. The code provides a basis for obtaining predictions of the surface heating to a body entering any planetary atmosphere at hyperbolic velocities. The code is written in FORTRAN V and is operational on both the Univac 1108 (EXEC 8) system in use at LMSC and the CDC 7600 system in use at the University of California, Berkeley. An overview of the SL-4 code computational logic flow, a description of the input requirements and output results, and comments on the practical use of the code are presented. As such, this report forms a user's manual for operation of the SL-4 code.

  20. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies (code patterns, transformation scripts, and compiler plugins) to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  1. Towards a testbed for malicious code detection

    SciTech Connect

    Lo, R.; Kerchen, P.; Crawford, R.; Ho, W.; Crossley, J.; Fink, G.; Levitt, K.; Olsson, R.; Archer, M. (Univ. of California, Davis, Div. of Computer Science)

    1991-01-01

    This paper proposes an environment for detecting many types of malicious code, including computer viruses, Trojan horses, and time/logic bombs. This malicious code testbed (MCT) is based upon both static and dynamic analysis tools developed at the University of California, Davis, which have been shown to be effective against certain types of malicious code. The testbed extends the usefulness of these tools by using them in a complementary fashion to detect more general cases of malicious code. Perhaps more importantly, the MCT allows administrators and security analysts to check a program before installation, thereby avoiding any damage a malicious program might inflict. 5 refs., 2 figs., 2 tabs.

  2. A Code of Practice for Further Education.

    ERIC Educational Resources Information Center

    Walker, Liz; Turner, Anthea

    This draft is the outcome of a project in which colleges and further education (FE) teacher education providers worked to pilot a code developed by students and staff at Loughborough College in England. The code is intended to be a resource for improving practice and enhancing the standing of the FE sector. It focuses on the essentials, affirms…

  3. MHDust: A 3-fluid dusty plasma code

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel

    MHDust is a next generation 3-fluid magnetized dusty plasma code, treating the inertial dynamics of both the dust and ion components. Coded in ANSI C, the numerical method employs Leap-Frog and Dufort-Frankel integration schemes. Features include: nonlinear collisional terms, quasi-neutrality or continuity based electron densities, and dynamical dust charge number. Tests of wave-mode propagation (Acoustic and Electromagnetic) allow a comparison to linear wave mode theory. Additional nonlinear phenomena are presented including magnetic reconnection and shear-flow instabilities. Relevant parameters for the space environment are considered, allowing a comparison to be made with previous dusty plasma codes (DENISIS). The utility of the code is expanded through the possibility of small dust mass. This allows MHDust to be used as a 2-ion plasma code. MHDust considerably expands the range of numerical investigations into nonlinear phenomena in the field of astrophysical dusty plasmas.
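
    For reference, the DuFort-Frankel scheme named above, written here for a model diffusion equation u_t = \alpha u_{xx} (an illustration of the scheme family only, not MHDust's actual discretized fluid equations):

        \frac{u_j^{n+1} - u_j^{n-1}}{2\,\Delta t} = \alpha \, \frac{u_{j+1}^{n} - u_j^{n+1} - u_j^{n-1} + u_{j-1}^{n}}{\Delta x^{2}}

    Replacing the central value u_j^{n} by the average of u_j^{n+1} and u_j^{n-1} is what turns the (unstable) leap-frog treatment of diffusion into an explicit, unconditionally stable update.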

  4. A distributed particle simulation code in C++

    SciTech Connect

    Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.

    1992-01-01

    Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.

  5. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  6. The Nuremberg Code-A critique.

    PubMed

    Ghooi, Ravindra B

    2011-04-01

    The Nuremberg Code, drafted at the end of the Doctors' Trial in Nuremberg in 1947, has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of ten principles in the Nuremberg Code are derived from the 1931 Guidelines, and two of four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside since, unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics. PMID:21731859

  7. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
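
    A toy sketch of the three steps described above; the term table and the invalid-combination rule are invented for illustration, and the Survey's actual lexicon and correction rules are far larger:

        import re

        TERM_CODES = {"clay": "K", "sand": "Z", "gravel": "G", "peat": "V"}
        FORBIDDEN = {("V", "G")}   # hypothetical example of an invalid code combination

        def code_log_entry(text):
            text = text.lower()
            # 1. search for defined words/word combinations and assign one code per match
            codes = [TERM_CODES[m] for m in re.findall("|".join(TERM_CODES), text)]
            # 2. delete duplicated codes, keeping the first occurrence
            unique = list(dict.fromkeys(codes))
            # 3. correct invalid code combinations (here: drop the later member of a bad pair)
            result = []
            for c in unique:
                if not any((p, c) in FORBIDDEN for p in result):
                    result.append(c)
            return result

        print(code_log_entry("Clay with sand lenses, some sand and a little gravel"))  # ['K', 'Z', 'G']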

  8. A Fortran 90 code for magnetohydrodynamics

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  9. Report on a workshop concerning code validation

    SciTech Connect

    1996-12-01

    The design of wind turbine components is becoming more critical as turbines become lighter and more dynamically active. Computer codes that will reliably predict turbine dynamic response are, therefore, more necessary than before. However, predicting the dynamic response of very slender rotating structures that operate in turbulent winds is not a simple matter. Even so, codes for this purpose have been developed and tested in North America and in Europe, and it is important to disseminate information on this subject. The purpose of this workshop was to allow those involved in the wind energy industry in the US to assess the progress in validation of the codes most commonly used for structural/aero-elastic wind turbine simulation. The theme of the workshop was, "How do we know it's right?" This was the question that participants were encouraged to ask themselves throughout the meeting in order to avoid the temptation of presenting information in a less-than-critical atmosphere. Other questions posed at the meeting are: What is the proof that the codes used can truthfully represent the field data? At what steps were the codes tested against known solutions, or against reliable field data? How should the designer or user validate results? What computer resources are needed? How do codes being used in Europe compare with those used in the US? How does the code used affect industry certification? What can be expected in the future?

  10. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  11. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  12. EMPIRE: A code for nuclear astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.

    2016-01-01

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exception of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparison to experimental data shows consistent agreement for all relevant channels.

  13. A Subband Coding Method for HDTV

    NASA Technical Reports Server (NTRS)

    Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.

  14. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

  15. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562

  16. MACRAD: A mass analysis code for radiators

    SciTech Connect

    Gallup, D.R.

    1988-01-01

    A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length to diameter ratio, vapor to wick radius, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.

  17. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmarks to real-world applications is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs, as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experiences with this toolkit.

  18. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  19. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  20. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  1. MININEC: A mini-numerical electromagnetics code

    NASA Astrophysics Data System (ADS)

    Julian, A. J.; Logan, J. C.; Rockway, J. W.

    1982-09-01

    An investigation was made of the merits of techniques that may result in a reduced version of an antenna modeling code applicable to small problems and small computer resources. The result is the identification of one promising numerical approach suggested by Dr. D. R. Wilton of the University of Mississippi. The approach has been coded in BASIC and implemented on a microcomputer. The computer code has been dubbed MININEC (Mini-Numerical Electromagnetics Code). MININEC solves an integral equation relating the electric field and the vector and scalar potentials. The solution involves a modified Galerkin procedure. This formulation results in a compact code suitable for use on a microcomputer. MININEC solves for the impedance and currents on arbitrarily oriented wires including configurations with multiple junctions. Options include lumped impedance loading and far field patterns. MININEC has been written in the BASIC language compatible with many popular microcomputers. MININEC has been implemented on the NOSC Univac 1100/82, the NOSC VAX, a CDI microcomputer, and an Apple microcomputer.

  2. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    ERIC Educational Resources Information Center

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  3. FREEFALL: A seabed penetrator flight code

    SciTech Connect

    Hickerson, J.

    1988-01-01

    This report presents a one-dimensional model and computer program for predicting the motion of seabed penetrators. The program calculates the acceleration, velocity, and depth of a penetrator as a function of time from the moment of launch until the vehicle comes to rest in the sediment. The code is written in Pascal language for use on a small personal computer. Results are presented as printed tables and graphs. A comparison with experimental data is given which indicates that the accuracy of the code is perhaps as good as current techniques for measuring vehicle performance. 31 refs., 12 figs., 5 tabs.
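
    An illustrative one-dimensional integration in the spirit of the model described above; the water-drag and sediment-resistance expressions below are invented placeholders, not FREEFALL's actual models:

        def drop(mass=500.0, area=0.03, dt=1e-3, water_depth=50.0):
            """Integrate downward motion until the penetrator comes to rest in the sediment."""
            g, rho_w, cd = 9.81, 1025.0, 0.7
            t, z, v = 0.0, 0.0, 0.0               # depth z and velocity v, positive downward
            while True:
                if z < water_depth:               # free fall through the water column
                    resist = 0.5 * rho_w * cd * area * v * v
                else:                             # hypothetical sediment resistance, growing with depth
                    resist = 5.0e3 + 2.0e3 * (z - water_depth) + 0.5 * rho_w * cd * area * v * v
                a = g - resist / mass
                v += a * dt
                z += v * dt
                t += dt
                if z >= water_depth and v <= 0.0:
                    return t, z - water_depth     # time to rest and penetration depth

        print(drop())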

  4. Student Codes of Conduct: A Guide to Policy Review and Code Development.

    ERIC Educational Resources Information Center

    New Jersey State Dept. of Education, Trenton. Div. of General Academic Education.

    Designed to assist New Jersey school districts in developing and implementing student codes of conduct, this document begins by examining the need for policy and clearly established rules, the rationale for codes of conduct, and the areas that such codes should address. Following a discussion of substantive and procedural rights and sources of…

  5. DUNE - a granular flow code

    SciTech Connect

    Slone, D M; Cottom, T L; Bateson, W B

    2004-11-23

    DUNE was designed to accurately model the spectrum of granular flow. Granular flow encompasses the motions of discrete particles. The particles are macroscopic in that there is no Brownian motion. The flow can be thought of as a dispersed phase (the particles) interacting with a fluid phase (air or water). Validation of the physical models proceeds in tandem with simple experimental confirmation. The current development team is working toward the goal of building a flexible architecture where existing technologies can easily be integrated to further the capability of the simulation. We describe the DUNE architecture in some detail using physics models appropriate for an imploding liner experiment.

  6. Should managers have a code of conduct?

    PubMed

    Bayliss, P

    1994-02-01

    Much attention is currently being given to values and ethics in the NHS. Issues of accountability are being explored as a consequence of the Cadbury report. The Institute of Health Services Management (IHSM) is considering whether managers should have a code of ethics. Central to this issue is what managers themselves think; the application of such a code may well stand or fall by whether managers are prepared to have ownership of it, and are prepared to make it work. Paul Bayliss reports on a survey of managers' views. PMID:10134423

  7. Code Blue: a family matter?

    PubMed

    Goforth, Rhonda

    2013-01-01

    The focus of this article is to encourage nurses and other healthcare staff to allow family members to be present during a resuscitation event. The author offers rationale, history, and simple guidelines for supporting families during this excruciating experience. PMID:23607158

  8. Performance of some block codes on a Gaussian channel

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.; Mceliece, R. J.

    1975-01-01

    A technique proposed by Chase (1972) is used to evaluate the performance of several fairly long binary block codes on a wideband additive Gaussian channel. Considerations leading to the use of Chase's technique are discussed. Chase's concepts are first applied to the most powerful practical class of binary codes, the BCH codes with Berlekamp's (1972) decoding algorithm. Chase's algorithm is then described along with proposed selection of candidate codes. Results are presented of applying Chase's algorithm to four binary codes: (23, 12) Golay code, (32, 16) second-order Reed-Muller code, (63, 36) 5-error correcting BCH code, and (95, 39) 9-error correcting shortened BCH code. It is concluded that there are many block codes of length not exceeding 100 with extremely attractive maximum likelihood decoding performance on a Gaussian channel. BCH codes decoded via Berlekamp's binary decoding algorithm and Chase's idea are close to being practical competitors to short-constraint length convolutional codes with Viterbi decoding.
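
    A minimal sketch of a Chase-style decoder (assumption: the Chase-2 variant, which flips combinations of the least reliable hard decisions). To stay self-contained it is shown on the small (7,4) Hamming code with brute-force nearest-codeword decoding, rather than the long BCH codes and Berlekamp decoder used in the paper:

        from itertools import combinations, product

        # (7,4) Hamming codebook built from a systematic generator matrix
        G = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 0, 1, 1],
             [0, 0, 1, 0, 1, 1, 1],
             [0, 0, 0, 1, 1, 0, 1]]
        codebook = [tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
                    for msg in product([0, 1], repeat=4)]

        def hard_decode(bits):
            """Nearest codeword in Hamming distance (brute force; fine for a tiny code)."""
            return min(codebook, key=lambda c: sum(a != b for a, b in zip(c, bits)))

        def chase_decode(llr, t=2):
            """Flip the t least reliable hard decisions, decode each trial pattern,
            and keep the candidate codeword closest to the soft received vector."""
            hard = [1 if x < 0 else 0 for x in llr]            # LLR > 0 means bit 0
            weakest = sorted(range(len(llr)), key=lambda i: abs(llr[i]))[:t]
            best, best_metric = None, float("inf")
            for k in range(t + 1):
                for flips in combinations(weakest, k):
                    trial = list(hard)
                    for i in flips:
                        trial[i] ^= 1
                    cand = hard_decode(trial)
                    metric = sum(llr[i] if cand[i] else -llr[i] for i in range(len(llr)))
                    if metric < best_metric:                   # smaller = closer to received vector
                        best, best_metric = cand, metric
            return best

        # One hard-decision error (third position) is corrected:
        print(chase_decode([2.1, 1.8, -0.3, 2.5, 0.4, 1.9, 2.2]))  # (0, 0, 0, 0, 0, 0, 0)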

  9. A Code of Ethics for Democratic Leadership

    ERIC Educational Resources Information Center

    Molina, Ricardo; Klinker, JoAnn Franklin

    2012-01-01

    Democratic leadership rests on sacred values, awareness, judgement, motivation and courage. Four turning points in a 38-year school administrator's career revealed decision-making in problematic moments stemmed from values in a personal and professional code of ethics. Reflection on practice and theory added vocabulary and understanding to make…

  10. Finding the key to a better code: code team restructure to improve performance and outcomes.

    PubMed

    Prince, Cynthia R; Hines, Elizabeth J; Chyou, Po-Huang; Heegeman, David J

    2014-09-01

    Code teams respond to acute life threatening changes in a patient's status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team's resuscitation could be hindered. To improve the overall performance of our hospital's code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements in team member confidence in the skills specific to their role and clarity in their role's position were identified. Smaller improvements were seen in team leadership and reduction in the amount of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year since the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as decreased the average time-to-defibrillation, making it beneficial to many, including the team members, medical institution, and patients. PMID:24667218

  11. Materials management with a bar code reader.

    PubMed

    Kaplan, R S

    1990-01-01

    A materials management system capable of inventory control, accounting and the automatic recording of supplies for a clinical department has been developed for the George Washington University Hospital Department of Anesthesia. This system combines a microprocessor-based computer for data storage and a hand-held bar code reader to record the bar code scan of each item in the inventory. A relational software program with easy-to-use menus and help keys was written. Bar code information stored for each item includes item number, quantity, date and time of issue. Accumulated bar code scans are loaded into the computer by use of a serial port and then used to update current inventory in the computer. Comparison between current inventory and reorder levels by the computer will initiate automatic printing of appropriate purchase orders. Reorder levels are adjusted regularly, by comparing previous year or month usage to current needs; items already on order, items on back order and delivery lag time are also taken into account. PMID:10104851
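
    A toy sketch of the scan-driven issue and reorder cycle described above (item numbers, quantities, and thresholds are invented; the hospital system's actual schema is not given in the abstract):

        inventory = {"10231": 42, "10544": 7}        # item number -> quantity on hand
        reorder_level = {"10231": 20, "10544": 10}
        on_order = {"10231": 0, "10544": 0}

        def record_scans(scans):
            """Each scan is (item_number, quantity_issued); returns purchase orders to print."""
            orders = []
            for item, qty in scans:
                inventory[item] -= qty               # issue stock against the scan
                if inventory[item] + on_order[item] < reorder_level[item]:
                    shortfall = reorder_level[item] - inventory[item]
                    orders.append((item, shortfall))
                    on_order[item] += shortfall
            return orders

        print(record_scans([("10544", 3)]))          # [('10544', 6)]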

  12. TACO: a finite element heat transfer code

    SciTech Connect

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady state problems. Either plane or axisymmetric geometries can be analyzed. TACO has the capability to handle time or temperature dependent material properties and materials may be either isotropic or orthotropic. A variety of time and temperature dependent loadings and boundary conditions are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemical reactive kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code.

  13. FLUKA: A Multi-Particle Transport Code

    SciTech Connect

    Ferrari, A.; Sala, P.R.; Fasso, A.; Ranft, J. (Siegen U.)

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  14. Building a Hydrodynamics Code with Kinetic Theory

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Bauer, Wolfgang; Colbry, Dirk; Pickett, Rodney; Strother, Terrance

    2013-08-01

    We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve a high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test-particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulation, especially for approaches which aim to study core-collapse supernovae.
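
    For reference, the Sedov blast wave test mentioned above compares the simulated shock front against the standard self-similar solution (a textbook result, not taken from the abstract):

        R_{\mathrm{shock}}(t) = \xi_0 \left( \frac{E \, t^{2}}{\rho_0} \right)^{1/5}

    where E is the injected energy, \rho_0 the ambient density, and \xi_0 an order-unity constant that depends on the adiabatic index.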

  15. CHEETAH: A next generation thermochemical code

    SciTech Connect

    Fried, L.; Souers, P.

    1994-11-01

    CHEETAH is an effort to bring the TIGER thermochemical code into the 1990s. A wide variety of improvements have been made in Version 1.0. We have improved the robustness and ease of use of TIGER. All of TIGER's solvers have been replaced by new algorithms. We find that CHEETAH solves a wider variety of problems with no user intervention (e.g. no guesses for the C-J state) than TIGER did. CHEETAH has been made simpler to use than TIGER; typical use of the code occurs with the new standard run command. CHEETAH will make the use of thermochemical codes more attractive to practical explosive formulators. We have also made an extensive effort to improve over the results of TIGER. CHEETAH's version of the BKW equation of state (BKWC) is able to accurately reproduce energies from cylinder tests; something that other BKW parameter sets have been unable to do. Calculations performed with BKWC execute very quickly; typical run times are under 10 seconds on a workstation. In the future we plan to improve the underlying science in CHEETAH. More accurate equations of state will be used in the gas and the condensed phase. A kinetics capability will be added to the code that will predict reaction zone thickness. Further ease of use features will eventually be added; an automatic formulator that adjusts concentrations to match desired properties is planned.

  16. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
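
    A sketch of the threshold-driven coder selection described above (the coder interfaces are hypothetical; the actual DCT coders and rates used in MBC are not specified here):

        def mbc_encode_block(block, coders, threshold):
            """coders: cheapest-first list of (encode, decode) pairs. Use the first
            coder whose reconstruction error falls below the distortion threshold;
            fall back to the last (highest-rate) coder otherwise."""
            for index, (encode, decode) in enumerate(coders):
                bits = encode(block)
                mse = sum((x - y) ** 2 for x, y in zip(block, decode(bits))) / len(block)
                if mse <= threshold or index == len(coders) - 1:
                    return index, bits   # the coder index is sent as side information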

  17. A Germanium-Based, Coded Aperture Imager

    SciTech Connect

    Ziock, K P; Madden, N; Hull, E; William, C; Lavietes, T; Cork, C

    2001-10-31

    We describe a coded-aperture based, gamma-ray imager that uses a unique hybrid germanium detector system. A planar, germanium strip detector, eleven millimeters thick, is followed by a coaxial detector. The 19 x 19 strip detector (2 mm pitch) is used to determine the location and energy of low energy events. The locations of high energy events are determined from the location of the Compton scatter in the planar detector, and the energy is determined from the sum of the coaxial and planar energies. With this geometry, we obtain useful quantum efficiency in a position-sensitive mode out to 500 keV. The detector is used with a 19 x 17 URA coded aperture to obtain spectrally resolved images in the gamma-ray band. We discuss the performance of the planar detector and the hybrid system, and present images taken of laboratory sources.

  18. Towards a biological coding theory discipline.

    SciTech Connect

    May, Elebeoba Eni

    2003-09-01

    How can information required for the proper functioning of a cell, an organism, or a species be transmitted in an error-introducing environment? Clearly, similar to engineering communication systems, biological systems must incorporate error control in their information transmission processes. If genetic information in the DNA sequence is encoded in a manner similar to error control encoding, the received sequence, the messenger RNA (mRNA), can be analyzed using coding theory principles. This work explores potential parallels between engineering communication systems and the central dogma of genetics and presents a coding theory approach to modeling the process of protein translation initiation. The messenger RNA is viewed as a noisy encoded sequence and the ribosome as an error control decoder. Decoding models based on chemical and biological characteristics of the ribosome and the ribosome binding site of the mRNA are developed and results of applying the models to Escherichia coli K-12 are presented.

  19. CAFE: A New Relativistic MHD Code

    NASA Astrophysics Data System (ADS)

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.

    2015-06-01

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.

  20. TAIR: A transonic airfoil analysis computer code

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.

    1981-01-01

    The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.

  1. LEGO: A modular accelerator design code

    SciTech Connect

    Cai, Y.; Donald, M.; Irwin, J.; Yan, Y.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors: TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in three-dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and nonlinear cases. Currently, the code is used to design and simulate the lattices of PEP-II. It will also be used for commissioning.
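
    As an illustration of the symplectic integration idea referred to above, the sketch below implements the standard second-order leapfrog (velocity-Verlet) scheme for a separable Hamiltonian and applies it to a harmonic oscillator; it is a generic textbook integrator, not one of the integrators implemented in LEGO.

    ```python
    def leapfrog(q, p, dHdq, dHdp, dt, steps):
        """Second-order symplectic (leapfrog / velocity-Verlet) integration of a
        separable Hamiltonian H(q, p) = T(p) + V(q)."""
        for _ in range(steps):
            p -= 0.5 * dt * dHdq(q)   # half kick
            q += dt * dHdp(p)         # drift
            p -= 0.5 * dt * dHdq(q)   # half kick
        return q, p

    # Example: unit harmonic oscillator, H = p^2/2 + q^2/2 (analogous to linear
    # transverse motion in a focusing lattice).
    q, p = 1.0, 0.0
    q, p = leapfrog(q, p, dHdq=lambda q: q, dHdp=lambda p: p, dt=0.1, steps=1000)

    # Near-conservation of H over long times is a hallmark of symplectic schemes.
    print("H after 1000 steps:", 0.5 * (p * p + q * q))
    ```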

  2. Xenomicrobiology: a roadmap for genetic code engineering.

    PubMed

    Acevedo-Rocha, Carlos G; Budisa, Nediljko

    2016-09-01

    Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes in the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes and proteins/proteomes with non-canonical nucleic acids (XNAs) and non-canonical amino acids (ncAAs), respectively. Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code with focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also make it possible to study and understand the relation between code evolution and horizontal gene transfer. PMID:27489097

  3. SORD: A New Rupture Dynamics Modeling Code

    NASA Astrophysics Data System (ADS)

    Ely, G.; Minster, B.; Day, S.

    2005-12-01

    We report on our progress in validating our rupture dynamics modeling code, capable of dealing with nonplanar faults and surface topography. The method uses a "mimetic" approach to model spontaneous rupture on a fault within a 3D isotropic anelastic solid, wherein the equations of motion are approximated with a second order Support-Operator method on a logically rectangular mesh. Grid cells are not required to be parallelepipeds, so non-rectangular meshes can be used to model complex regions. For areas of the mesh that are in fact rectangular, however, the code uses a streamlined version of the algorithm that takes advantage of the simplifications of the operators in such areas. The fault itself is modeled using a double node technique, and the rheology on the fault surface is modeled through a slip-weakening, frictional, internal boundary condition. The Support Operator Rupture Dynamics (SORD) code was prototyped in MATLAB, and all algorithms have been validated against known solutions (including analytical solutions, e.g., Kostrov, 1964) or previously validated numerical solutions. This validation effort is conducted in the context of the SCEC Dynamic Rupture model validation effort led by R. Archuleta and R. Harris. Absorbing boundaries at the model edges are handled using the perfectly matched layers method (PML) (Olsen & Marcinkovich, 2003). PML is shown to work extremely well on rectangular meshes. We show that our implementation is also effective on non-rectangular meshes under the restriction that the boundary be planar. For validation of the model we use a variety of test cases using two types of meshes: a rectangular mesh and a skewed mesh. The skewed mesh amplifies any biases caused by the Support-Operator method on non-rectangular elements. Wave propagation and absorbing boundaries are tested with a spherical wave source. Rupture dynamics on a planar fault are tested against (1) a Kostrov analytical solution, (2) data from foam rubber scale models
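
    The slip-weakening internal boundary condition mentioned above is commonly written as a fault strength that decreases linearly from a static to a dynamic value over a critical slip distance. The sketch below implements that generic law with illustrative parameter values; it is not code from SORD, and the values are assumptions rather than those of any particular benchmark.

    ```python
    def slip_weakening_strength(slip, mu_s, mu_d, d_c, sigma_n):
        """Linear slip-weakening frictional strength (shear stress the fault can
        sustain) as a function of accumulated slip.

        slip    : accumulated fault slip [m]
        mu_s    : static friction coefficient
        mu_d    : dynamic friction coefficient
        d_c     : critical slip-weakening distance [m]
        sigma_n : effective normal stress [Pa]
        """
        if slip >= d_c:
            mu = mu_d
        else:
            mu = mu_s - (mu_s - mu_d) * slip / d_c
        return mu * sigma_n

    # Illustrative values only.
    for s in (0.0, 0.2, 0.4, 0.8):
        tau = slip_weakening_strength(s, mu_s=0.677, mu_d=0.525, d_c=0.4, sigma_n=120e6)
        print(f"slip = {s:.1f} m -> strength = {tau/1e6:.1f} MPa")
    ```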

  4. A computer analysis program for interfacing thermal and structural codes

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.; Maffeo, R. J.

    1985-01-01

    A software package has been developed to transfer three-dimensional transient thermal information accurately, efficiently, and automatically from a heat transfer analysis code to a structural analysis code. The code is called three-dimensional TRansfer ANalysis Code to Interface Thermal and Structural codes, or 3D TRANCITS. TRANCITS has the capability to couple finite difference and finite element heat transfer analysis codes to linear and nonlinear finite element structural analysis codes. TRANCITS currently supports the output of SINDA and MARC heat transfer codes directly. It will also format the thermal data output directly so that it is compatible with the input requirements of the NASTRAN and MARC structural analysis codes. Other thermal and structural codes can be interfaced using the transfer module with the neutral heat transfer input file and the neutral temperature output file. The transfer module can handle different elemental mesh densities for the heat transfer analysis and the structural analysis.
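
    Passing nodal temperatures between meshes of different density ultimately requires some form of spatial interpolation. The sketch below uses simple inverse-distance weighting purely as an illustration of the idea; it is not the transfer algorithm used by TRANCITS, and the node coordinates and temperatures are invented.

    ```python
    import numpy as np

    def transfer_temperatures(thermal_nodes, thermal_temps, structural_nodes, power=2.0):
        """Map nodal temperatures from a thermal mesh onto a structural mesh using
        inverse-distance weighting (an illustrative scheme, not TRANCITS itself)."""
        mapped = np.empty(len(structural_nodes))
        for i, x in enumerate(structural_nodes):
            d = np.linalg.norm(thermal_nodes - x, axis=1)
            if np.any(d < 1e-12):              # coincident node: copy directly
                mapped[i] = thermal_temps[np.argmin(d)]
                continue
            w = 1.0 / d**power
            mapped[i] = np.sum(w * thermal_temps) / np.sum(w)
        return mapped

    # Coarse thermal mesh (4 nodes) mapped onto a finer structural mesh (5 nodes).
    t_nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    t_temps = np.array([300.0, 350.0, 400.0, 450.0])
    s_nodes = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.25, 0.75], [1.0, 1.0]])

    print(transfer_temperatures(t_nodes, t_temps, s_nodes))
    ```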

  5. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  6. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  7. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-2 to 10^-1.
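
    Under the simplifying assumption that outer-code symbols fail independently after inner decoding (ignoring erasures and error detection), the probability of correct decoding of a t-error-correcting outer code word reduces to a binomial tail sum. The sketch below evaluates that simplified expression; it illustrates the kind of computation described, not the paper's exact procedure, and the code parameters are assumptions.

    ```python
    from math import comb

    def p_correct_decoding(n, t, p_sym):
        """Probability that a t-error-correcting outer code word of length n is
        decoded correctly, assuming independent symbol errors with probability
        p_sym (a simplified model that ignores erasures and error detection)."""
        return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i) for i in range(t + 1))

    # Example: an outer code over GF(2^8) with n = 255 and t = 16, for a few
    # post-inner-decoding symbol error rates.
    for p in (1e-2, 3e-2, 5e-2):
        print(f"p_sym = {p:.0e} -> P(correct) = {p_correct_decoding(255, 16, p):.6f}")
    ```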

  8. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    SciTech Connect

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss-of-coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system code, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
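
    The linkage-code idea described in the last sentence amounts to exchanging boundary conditions between two separately advancing models at every time step. The sketch below wires together two deliberately crude toy models (a lumped primary system and a lumped containment) to show only the data exchange per step; the physics and constants are invented for illustration and do not represent RELAP, RETRAN, CONTAIN, or CONTEMPT.

    ```python
    # Toy time-step coupling between a "primary system" model and a "containment"
    # model. The physics is deliberately crude; only the per-step data exchange,
    # which allows containment feedback on the primary side, is of interest.

    def break_flow(p_primary, p_containment, k=2.0e-3):
        """Mass flow through the break [kg/s], driven by the pressure difference."""
        return max(0.0, k * (p_primary - p_containment))

    def step_primary(p_primary, m_dot, dt, c=50.0):
        """Primary pressure falls as mass is lost (toy model)."""
        return p_primary - c * m_dot * dt

    def step_containment(p_containment, m_dot, dt, c=5.0):
        """Containment pressure rises as mass is injected (toy model)."""
        return p_containment + c * m_dot * dt

    p_prim, p_cont = 7.0e6, 1.0e5   # Pa
    dt, t_end = 0.1, 20.0           # s
    t = 0.0
    while t < t_end:
        m_dot = break_flow(p_prim, p_cont)            # exchanged each time step
        p_prim = step_primary(p_prim, m_dot, dt)
        p_cont = step_containment(p_cont, m_dot, dt)
        t += dt

    print(f"final primary pressure {p_prim/1e6:.2f} MPa, containment {p_cont/1e3:.1f} kPa")
    ```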

  9. CORA - A Semiautomatic Coding System Application to the Coding of Markush Formulas

    ERIC Educational Resources Information Center

    Deforeit, Huguette; And Others

    1972-01-01

    A computer system, named CORA, has been devised for coding chemical structures by fragmentation elements. It has been used to encode Markush formulas in patents according to the Ring codes used in the Ringdoc and Pestdoc services and results in an easy, speedy, reliable and inexpensive method. (4 references) (Author)

  10. LOWTHRM: a thermal fluence code. Master's thesis

    SciTech Connect

    Westbrook, C.R.

    1980-03-01

    A Fortran computer program, LOWTHRM, is described for calculating nuclear thermal fluence incident upon a target area. Atmospheric transmissivity factors in the spectral region 0.25 to 28.5 microns are determined through use of the LOWTRAN5 computer code. The program provides a choice of six model atmospheres covering seasonal and latitudinal variations from sea level to 100 km, eight haze models, and accounts for molecular absorption, molecular scattering, and aerosol extinction. Atmospheric refraction, earth curvature effects, thermal scattering, and thermal ground reflection contributions are included.

  11. Visual mismatch negativity: a predictive coding view.

    PubMed

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control "refractoriness" effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  12. Visual mismatch negativity: a predictive coding view

    PubMed Central

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control “refractoriness” effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  13. A Code of Ethics for Referees?

    NASA Astrophysics Data System (ADS)

    Sturrock, Peter A.

    2004-04-01

    I have read with interest the many letters commenting on the pros and cons of anonymity for referees. While I sympathize with writers who have suffered from referees who are incompetent or uncivil, I also sympathize with those who argue that one would simply exchange one set of problems for another if journals were to require that all referees waive anonymity. Perhaps there is a more direct way to address the issue. It may help if guidelines for referees were to include a code of ethics.

  14. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

    In this paper, we use a previously proposed construction technique to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes reported in the literature.

  15. CHEETAH: A fast thermochemical code for detonation

    SciTech Connect

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculate trace levels of chemical compounds for environmental analysis; kinetics capability: CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates. Initial application: carbon condensation; CHEETAH will incorporate partial reactions; CHEETAH will be based on computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets; CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  16. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel Aaron Lazerson

    2012-07-27

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device where the vertical field was ramped. As an exercise of the code, the diagnostic responses for various equilibria are calculated on the Large Helical Device (LHD).

  17. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel A. Lazerson, S. Sakakibara and Y. Suzuki

    2013-03-12

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) where the vertical field was ramped. As an exercise of the code, the diagnostic responses for various equilibria are calculated on the LHD.

  18. A Construction of Lossy Source Code Using LDPC Matrices

    NASA Astrophysics Data System (ADS)

    Miyake, Shigeki; Muramatsu, Jun

    Research into applying LDPC code theory, which is used for channel coding, to source coding has received a lot of attention in several research fields such as distributed source coding. In this paper, a source coding problem with a fidelity criterion is considered. Matsunaga et al. and Martinian et al. constructed a lossy code under the conditions of a binary alphabet, a uniform distribution, and a Hamming measure of fidelity criterion. We extend their results and construct a lossy code under the extended conditions of a binary alphabet, a distribution that is not necessarily uniform, and a fidelity measure that is bounded and additive, and show that the code can achieve the optimal rate, i.e., the rate-distortion function. By applying a formula for random walks on a lattice to the analysis of LDPC matrices on Zq, where q is a prime number, we show that results similar to those for the binary alphabet condition also hold for Zq, i.e., for the multiple alphabet condition.
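
    For reference, the rate-distortion function that such a lossy code aims to achieve is known in closed form for a binary source under Hamming distortion: R(D) = h(p) - h(D) for 0 <= D <= min(p, 1-p), and 0 otherwise. The sketch below evaluates this classical benchmark; it is background material, not the paper's LDPC-based construction.

    ```python
    from math import log2

    def h2(x):
        """Binary entropy function in bits."""
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return -x * log2(x) - (1 - x) * log2(1 - x)

    def rate_distortion_binary(p, D):
        """Rate-distortion function of a Bernoulli(p) source under Hamming
        distortion: R(D) = h(p) - h(D) for 0 <= D <= min(p, 1-p), else 0."""
        if D >= min(p, 1 - p):
            return 0.0
        return h2(p) - h2(D)

    # A non-uniform source (p = 0.3), as allowed by the extended conditions.
    for D in (0.0, 0.05, 0.1, 0.2):
        print(f"D = {D:.2f} -> R(D) = {rate_distortion_binary(0.3, D):.4f} bits/symbol")
    ```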

  19. Containment Fire Simulation by a CFD Code

    SciTech Connect

    Heitsch, Matthias

    2002-07-01

    In the frame of an international collaborative project to evaluate fire models, a code benchmark was initiated to better quantify the strengths and weaknesses of the codes involved. CFX has been applied to simulate selected cases of both parts of the benchmark. These simulations are presented and discussed in this paper. In the first part of the benchmark, a pool fire represented simply by a heat release table is considered. Consequently, the physical fire model within CFX is simple. Radiative heat exchange and turbulent mixing are involved. Two cases with and without venting of the fire room are compared. The second part of the benchmark requires a more detailed fire model in order to inspect the availability of oxygen locally and to control the fire intensity. Under unvented conditions oxygen starvation is encountered and the fire oscillates. Mechanical ventilation changes this behavior and provides enough oxygen over the whole simulation time. The predefined damage criteria used to characterize whether a target cable in the fire room would be damaged are not met. However, the predicted surface temperatures are well above the assumed threshold temperatures. A continuation of the work presented is foreseen and will address more complex physical modeling of relevant fire scenarios. (author)

  20. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self-generated as well as external electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  1. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  2. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  3. Python interface generator for Fortran based codes (a code development aid)

    SciTech Connect

    Grote, D. P.

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.
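
    Forthon's own wrapping workflow is not reproduced here; as a generic illustration of the Fortran-to-Python idea, the sketch below uses numpy's bundled f2py tool to compile and call a small Fortran subroutine. It assumes numpy and a Fortran compiler are available, and the file and module names are arbitrary.

    ```python
    # Generic Fortran <-> Python wrapping with numpy's f2py (NOT Forthon itself;
    # Forthon additionally exposes the Fortran variable database to Python).
    import pathlib
    import subprocess
    import sys

    import numpy as np

    fortran_src = """
    subroutine vec_add(a, b, c, n)
      integer, intent(in) :: n
      real(8), intent(in) :: a(n), b(n)
      real(8), intent(out) :: c(n)
      c = a + b
    end subroutine vec_add
    """

    pathlib.Path("vec_add.f90").write_text(fortran_src)

    # Build the extension module (requires a Fortran compiler on the PATH).
    subprocess.run(
        [sys.executable, "-m", "numpy.f2py", "-c", "-m", "fastmath", "vec_add.f90"],
        check=True,
    )

    sys.path.insert(0, str(pathlib.Path.cwd()))
    import fastmath  # the generated wrapper module

    c = fastmath.vec_add(np.arange(3.0), np.ones(3))  # length n is inferred by f2py
    print(c)
    ```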

  4. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2010-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
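
    The encoder chain described above (outer coder, interleaver, recursive inner coder, mapper) can be illustrated structurally with simple stand-ins for each stage. In the sketch below, a repetition code, a fixed pseudorandom permutation, an accumulator, and Gray-mapped QPSK are assumptions chosen only to show how the stages compose; this is not the patented SCTCM construction.

    ```python
    import random

    random.seed(1)

    def outer_encode(bits):
        """Stand-in outer coder: rate-1/2 repetition code (each bit sent twice)."""
        return [b for b in bits for _ in range(2)]

    def make_interleaver(n):
        """Fixed pseudorandom permutation of length n."""
        perm = list(range(n))
        random.shuffle(perm)
        return perm

    def interleave(bits, perm):
        return [bits[perm[i]] for i in range(len(perm))]

    def inner_encode(bits):
        """Recursive rate-1 inner coder: an accumulator (running XOR), a common
        stand-in for the recursive inner code in serial concatenation."""
        out, state = [], 0
        for b in bits:
            state ^= b
            out.append(state)
        return out

    def map_qpsk(bits):
        """Map bit pairs to Gray-labelled QPSK symbols (complex points)."""
        table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
        return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = outer_encode(data)
    perm = make_interleaver(len(coded))
    symbols = map_qpsk(inner_encode(interleave(coded, perm)))
    print(symbols)
    ```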

  5. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2011-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  6. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of the following: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  7. EUNHA: a New Cosmological Hydrodynamic Simulation Code

    NASA Astrophysics Data System (ADS)

    Shin, Jihye; Kim, Juhan; Kim, Sungsoo S.; Park, Changbom

    2014-06-01

    We develop a parallel cosmological hydrodynamic simulation code designed for the study of formation and evolution of cosmological structures. The gravitational force is calculated using the TreePM method and the hydrodynamics is implemented based on smoothed particle hydrodynamics. The initial displacement and velocity of simulation particles are calculated according to second-order Lagrangian perturbation theory using the power spectra of dark matter and baryonic matter. The initial background temperature is given by Recfast and the temperature fluctuations at the initial particle positions are assigned according to the adiabatic model. We use a time-limiter scheme over the individual time steps to capture shock-fronts and to ease the time-step tension between the shock and preshock particles. We also include the astrophysical gas processes of radiative heating/cooling, star formation, metal enrichment, and supernova feedback. We test the code on several standard cases such as one-dimensional Riemann problems, the Kelvin-Helmholtz instability, and the Sedov blast wave. Star formation on the galactic disk is investigated to check whether the Schmidt-Kennicutt relation is properly recovered. We also study the global star formation history at different simulation resolutions and compare them with observations.

  8. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    PubMed

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

    Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9:61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals. PMID:26922680
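
    A generic threshold-crossing spike encoder illustrates how spike timing can track an input so that the instantaneous spike rate follows the stimulus amplitude. The sketch below is such a generic encoder with arbitrary parameter values; it is not the authors' optimal, energy-constrained coder.

    ```python
    import math

    def spike_encode(signal, dt, theta):
        """Generic integrate-and-fire style encoder: accumulate the input and emit
        a spike each time the accumulated, not-yet-reported "charge" crosses the
        threshold theta. The spike rate then tracks the signal amplitude."""
        spikes, acc = [], 0.0
        for k, x in enumerate(signal):
            acc += x * dt
            if acc >= theta:
                spikes.append(k * dt)   # spike time
                acc -= theta            # subtract the threshold, keep the remainder
        return spikes

    # A slowly varying positive stimulus: higher amplitude -> higher spike rate.
    dt = 1e-3
    signal = [5.0 + 4.0 * math.sin(2 * math.pi * 2 * t * dt) for t in range(2000)]
    spikes = spike_encode(signal, dt, theta=0.02)

    rate = len(spikes) / (len(signal) * dt)
    print(f"{len(spikes)} spikes, mean rate {rate:.1f} spikes/s")
    ```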

  9. Thinking through the Issues in a Code of Ethics

    ERIC Educational Resources Information Center

    Davis, Michael

    2008-01-01

    In June 2005, seven people met at the Illinois Institute of Technology (IIT) to develop a code of ethics governing all members of the university community. The initial group developed a preamble, that included reasons for establishing such a code and who was to be governed by the code, including rationale for following the guidelines. From this…

  10. A burst-correcting algorithm for Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.

    1990-01-01

    The Bose, Chaudhuri, and Hocquenghem (BCH) codes form a large class of powerful error-correcting cyclic codes. Among the non-binary BCH codes, the most important subclass is the Reed Solomon (RS) codes. Reed Solomon codes have the ability to correct random and burst errors. It is well known that an (n,k) RS code can correct up to (n-k)/2 random errors. When burst errors are involved, the error correcting ability of the RS code can be increased beyond (n-k)/2. It has previously been shown that RS codes can reliably correct burst errors of length greater than (n-k)/2. In this paper, a new decoding algorithm is given which can also correct a burst error of length greater than (n-k)/2.

  11. A new description of combined trellis coding with asymmetric modulation

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1985-01-01

    The combination of rate k/(k+t) trellis codes with digital modulations described by an asymmetric 2^(k+1)-point signal constellation has recently been shown to yield performance improvement over the traditional symmetric constellation combined with the same trellis code. The approach taken is to specify an underlying trellis code and then map the output code symbols into the fixed signal constellation based on a rule called mapping by set partitioning. The latter process is tantamount to assigning signals from the constellation to the trellis code transitions so as to maximize the free Euclidean distance of the code. Recently, a new description of trellis codes has been given that combines the above two steps into one. The ideas introduced are further explored, placing particular emphasis on the optimization of the signal constellation asymmetry. It can be concluded that the trellis-coded amplitude modulation (AM) designs given are very close to being optimum.
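
    The effect of constellation asymmetry can be made concrete by computing Euclidean distances as the asymmetry parameter varies. The sketch below does this for a 4-PSK constellation whose two antipodal subsets are separated by an adjustable angle; the constellation and the partition are illustrative assumptions, not the specific designs analyzed in the paper.

    ```python
    import numpy as np

    def asymmetric_4psk(phi):
        """Asymmetric 4-PSK: subset A at angles {0, pi}, subset B at {phi, pi+phi}.
        phi = pi/2 recovers the usual symmetric QPSK constellation."""
        a = np.exp(1j * np.array([0.0, np.pi]))
        b = np.exp(1j * np.array([phi, np.pi + phi]))
        return a, b

    def min_intersubset_d2(phi):
        a, b = asymmetric_4psk(phi)
        return min(abs(x - y) ** 2 for x in a for y in b)

    # Intra-subset squared distance stays at 4 (antipodal points); the minimum
    # inter-subset squared distance shrinks as the asymmetry angle decreases,
    # which is the degree of freedom a trellis code can trade on.
    for deg in (90, 75, 60, 45):
        phi = np.deg2rad(deg)
        print(f"asymmetry angle {deg:2d} deg -> min inter-subset d^2 = "
              f"{min_intersubset_d2(phi):.3f}")
    ```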

  12. A novel RS BTC coding scheme for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Jia, Yue-xing; Hu, Yun-xia

    2012-07-01

    A novel Reed Solomon (RS) block turbo code (BTC) coding scheme of RS(63,58)×RS(63,58) for optical communications is proposed. The simulation results show that the net coding gain (NCG) of this scheme at the sixth iteration is more than that of other coding schemes at the third iteration for a bit error rate (BER) of 10^-12. Furthermore, the novel RS BTC has shorter component codes and faster encoding and decoding. Therefore, the novel RS BTC coding scheme is well suited to high-speed, long-haul optical communication systems, and the novel RS BTC can be regarded as a candidate code for super forward error correction (super-FEC). Moreover, the encoding/decoding design and implementation of the novel RS BTC are also presented.

  13. The Problem of Evolving a Genetic Code

    ERIC Educational Resources Information Center

    Woese, Carl R.

    1970-01-01

    Proposes models for the evolution of the genetic code and translation mechanisms. Suggests that the translation process is so complex and precise that it must have evolved in many stages, and that the evolution of the code was influenced by the constraints imposed by the evolving translation mechanism. (EB)

  14. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. Accuracy and the convergence of the technique is evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
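
    The classical ART update is the Kaczmarz row-projection step, x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i, cycled over the rows of the system matrix. The sketch below shows that basic iteration on a tiny consistent system; the iterative-refinement least-squares variant developed in the paper is not reproduced here.

    ```python
    import numpy as np

    def art_kaczmarz(A, b, sweeps=50, relax=1.0):
        """Classical ART (Kaczmarz) iteration for A x = b: cycle through the rows
        and project the current estimate onto each row's hyperplane."""
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for a_i, b_i in zip(A, b):
                x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
        return x

    # Tiny tomography-like example: 3 "ray sums" through a 3-pixel object.
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
    x_true = np.array([1.0, 2.0, 3.0])
    b = A @ x_true

    print(art_kaczmarz(A, b))   # converges toward [1, 2, 3] for this consistent system
    ```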

  15. Performance results for a hybrid coding system.

    NASA Technical Reports Server (NTRS)

    Hoffman, L. B.

    1971-01-01

    Results are presented from computer simulation studies of the hybrid pull-up bootstrap decoding algorithm, using a constraint-length-24, nonsystematic, rate-1/2 convolutional code on the symmetric channel with both binary and eight-level quantized outputs. Computational performance was used to measure the effect of several decoder parameters and determine practical operating constraints. Results reveal that the track length may be reduced to 500 information bits with small degradation in performance. The optimum number of tracks per block was found to be in the range from 7 to 11. An effective technique was devised to efficiently allocate computational effort and identify reliably decoded data sections. Long simulations indicate that a practical bootstrap decoding configuration has a computational performance about 1.0 dB better than sequential decoding and an output bit error rate of about 2.5 x 10^-6 near the R_comp point.

  16. Standardized pill imprint codes: a pharma fantasy.

    PubMed

    Schiff, Gordon

    2004-02-01

    To safely use medications, professionals and consumers need usable and reliable methods to identify the tablets patients are prescribed and taking. Currently, each manufacturer assigns its own identifying codes and symbols. Standardization of the system for identifying solid dosage forms is a goal that has been widely advocated, yet stubbornly resistant to progress. Physicians, pharmacists, and consumers attempting to identify pills must use various methods which have shortcomings in ease of use, availability, and accuracy. Arguments have been advanced, particularly by pharmaceutical manufacturers, that evidence of unworkability of the current system is not compelling, and costs of retooling current manufacturing processes could be prohibitive. These issues are currently being explored by a task force led by the U.S. Pharmacopeia Safe Medication Use and Pharmaceutical Dosage Forms Expert Committees. This paper presents a fictitious case study of an elderly patient succumbing to digoxin overdose, illustrating the dilemmas posed in the tablet-imprint debate. PMID:15171065

  17. Unravelling a histone code for malaria virulence.

    PubMed

    Comeaux, Christy A; Duraisingh, Manoj T

    2007-12-01

    Epigenetic phenomena have been shown to play a role in the regulated expression of virulence genes in several pathogenic organisms, including the var gene family in Plasmodium falciparum. A better understanding of how P. falciparum can both maintain a single active var gene locus through many erythrocytic cycles and also achieve successive switching to different loci in order to evade the host immune system is greatly needed. Disruption of this tightly co-ordinated expression system presents an opportunity for increased clearance of the parasites by the immune system and, in turn, reduced mortality and morbidity. In the current issue of Molecular Microbiology, Lopez-Rubio and colleagues investigate the correlation of specific post-translational histone modifications with different transcriptional states of a single var gene, var2csa. Quantitative chromatin immunoprecipitation is used to demonstrate that different histone methylation marks are enriched at the 5' flanking and coding regions of active, poised or silenced var genes. They identify an increase of H3K4me2 and H3K4me3 in the 5' flanking region of an active var locus and expand on an earlier finding that H3K9me3 is enriched in the coding regions of silenced var genes. The authors also present evidence that H3K4me2 bookmarks the active var gene locus during later developmental stages for expression in the subsequent asexual cycle, hinting at a potential mechanism for transcriptional 'memory'. The stage is now set for work generating a complete catalogue of all histone modifications associated with var gene regulation as well as functional studies striving to uncover the precise mechanisms underlying these observations. PMID:18028316

  18. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  19. Deciphering a neural code for vision.

    PubMed

    Passaglia, C; Dodge, F; Herzog, E; Jackson, S; Barlow, R

    1997-11-11

    Deciphering the information that eyes, ears, and other sensory organs transmit to the brain is important for understanding the neural basis of behavior. Recordings from single sensory nerve cells have yielded useful insights, but single neurons generally do not mediate behavior; networks of neurons do. Monitoring the activity of all cells in a neural network of a behaving animal, however, is not yet possible. Taking an alternative approach, we used a realistic cell-based model to compute the ensemble of neural activity generated by one sensory organ, the lateral eye of the horseshoe crab, Limulus polyphemus. We studied how the neural network of this eye encodes natural scenes by presenting to the model movies recorded with a video camera mounted above the eye of an animal that was exploring its underwater habitat. Model predictions were confirmed by simultaneously recording responses from single optic nerve fibers of the same animal. We report here that the eye transmits to the brain robust "neural images" of objects having the size, contrast, and motion of potential mates. The neural code for such objects is not found in ambiguous messages of individual optic nerve fibers but rather in patterns of coherent activity that extend over small ensembles of nerve fibers and are bound together by stimulus motion. Integrative properties of neurons in the first synaptic layer of the brain appear well suited to detecting the patterns of coherent activity. Neural coding by this relatively simple eye helps explain how horseshoe crabs find mates and may lead to a better understanding of how more complex sensory organs process information. PMID:9356504

  20. HIDUTYDRV Code, A Fuel Product Margin Tool

    SciTech Connect

    Krammen, Michael A.; Karoutas, Zeses E.; Grill, Steven F.; Sutharshan, Balendra

    2007-07-01

    HIDUTYDRV is a computer code currently used in core design to model the best estimate steady-state fuel rod corrosion performance for Westinghouse's CE-design 14x14 and 16x16 fuel. The fuel rod oxide thickness, sub-cooled nucleate boiling (referred to as mass evaporation or steaming), and fuel duty indices can be predicted for individual rods or up to every fuel rod in the quarter core at every nuclear fuel management depletion time-step as a function of axial elevation within the core. Best estimate operating margins for fuel components whose performance depends on the local power and thermal hydraulic conditions are candidates for analysis with HIDUTYDRV. HIDUTYDRV development will focus on fuel component parameters associated with known leakers for addressing INPO goals to eliminate fuel leakers by 2010. (authors)

  1. Toward a Code of Conduct for Graduate Education

    ERIC Educational Resources Information Center

    Proper, Eve

    2012-01-01

    Most academic disciplines promulgate codes of ethics that serve as public statements of professional norms of their membership. These codes serve both symbolic and practical purposes, stating to both members and the larger public what a discipline's highest ethics are. This article explores what scholarly society codes of ethics could say about…

  2. A Code for Probabilistic Safety Assessment

    SciTech Connect

    1997-10-10

    An integrated fault-event tree software package PSAPACK was developed for level-1 PSA using personal computers. It is a menu driven interactive modular system which permits different choices, depending on the user's purposes and needs. The event tree development module is capable of developing the logic accident sequences based on the user's specified relations between event tree headings. Identification of success sequences and core damage sequences is done automatically by the code based on the success function input by the user. It links minimum cut sets (MCS) from system fault trees and performs the Boolean reduction. It can also retrieve data from the reliability data base to perform the quantification of accident sequences.

  3. A Code for Probabilistic Safety Assessment

    Energy Science and Technology Software Center (ESTSC)

    1997-10-10

    An integrated fault-event tree software package PSAPACK was developed for level-1 PSA using personal computers. It is a menu driven interactive modular system which permits different choices, depending on the user's purposes and needs. The event tree development module is capable of developing the logic accident sequences based on the user's specified relations between event tree headings. Identification of success sequences and core damage sequences is done automatically by the code based on the success function input by the user. It links minimum cut sets (MCS) from system fault trees and performs the Boolean reduction. It can also retrieve data from the reliability data base to perform the quantification of accident sequences.

  4. A Proposed Code Of Ethics For Infrared Thermographic Professionals

    NASA Astrophysics Data System (ADS)

    Roberts, Charles C.

    1987-05-01

    The American Heritage Dictionary defines ethics as "The general study of morals and of specific moral choices to be made by the individual in his relationship with others". A code of ethics defines these moral relationships to encourage integrity throughout a profession. A defined code of ethics often yields credibility to an organization or association of professionals. This paper outlines a proposed code of ethics for practitioners in the infrared thermographic field. The proposed code covers relationships with the public, clients, other professionals and employers. The proposed code covers credentials, capabilities, thermograms, compensation and safety.

  5. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  6. Benchmark study between FIDAP and a cellular automata code

    SciTech Connect

    Akau, R.L.; Stockman, H.W.

    1991-01-01

    A fluid flow benchmark exercise was conducted to compare results between a cellular automata code and FIDAP. Cellular automata codes are free from gridding constraints, and are generally used to model slow (Reynolds number ≈ 1) flows around complex solid obstacles. However, the accuracy of cellular automata codes at higher Reynolds numbers, where inertial terms are significant, is not well-documented. In order to validate the cellular automata code, two fluids problems were investigated. For both problems, flow was assumed to be laminar, two-dimensional, isothermal, incompressible and periodic. Results showed that the cellular automata code simulated the overall behavior of the flow field. 7 refs., 12 figs.

  7. A grouped binary time code for telemetry and space applications

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    A computer oriented time code designed for users with various time resolution requirements is presented. It is intended as a time code for spacecraft and ground applications where direct code compatibility with automatic data processing equipment is of primary consideration. The principal features of this time code are a byte-oriented format, selectable resolution options (from seconds to nanoseconds), and a long ambiguity period. The time code is compatible with the new data handling and management concepts such as the NASA End-to-End Data System and the Telemetry Data Packetization format.
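
    The byte-oriented, selectable-resolution idea can be illustrated by packing a seconds count into a fixed number of bytes followed by an optional binary-fraction field. The field layout in the sketch below (a 4-byte seconds field plus 0, 2, or 4 sub-second bytes) is an assumption for illustration, not the format specified in the paper.

    ```python
    def encode_time(seconds, subseconds=0.0, n_sub_bytes=0):
        """Pack an epoch-relative time into a byte-oriented field: a 4-byte
        unsigned seconds count followed by n_sub_bytes bytes of binary fraction
        (0, 2 or 4 bytes give roughly 1 s, ~15 us and ~0.23 ns resolution).
        The exact field layout here is illustrative only."""
        out = seconds.to_bytes(4, "big")
        frac = int(subseconds * (1 << (8 * n_sub_bytes))) if n_sub_bytes else 0
        return out + frac.to_bytes(n_sub_bytes, "big")

    def decode_time(data, n_sub_bytes=0):
        seconds = int.from_bytes(data[:4], "big")
        frac = int.from_bytes(data[4:4 + n_sub_bytes], "big") if n_sub_bytes else 0
        return seconds + (frac / (1 << (8 * n_sub_bytes)) if n_sub_bytes else 0.0)

    code = encode_time(123456789, 0.34567, n_sub_bytes=4)
    print(code.hex(), decode_time(code, n_sub_bytes=4))
    ```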

  8. A concatenated coded modulation scheme for error control (addition 2)

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is described. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high-speed satellite communications for large file transfer where high reliability is required. This paper also presents a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft-decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 deg rotation.

  9. A concatenated coded modulation scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is presented. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is very effective for high speed satellite communications for large file transfer where high reliability is required. A simple method is also presented for constructing codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 deg rotation.

  10. Circular code motifs in transfer and 16S ribosomal RNAs: a possible translation code in genes.

    PubMed

    Michel, Christian J

    2012-04-01

    In 1996, a common trinucleotide circular code, called X, was identified in genes of eukaryotes and prokaryotes (Arquès and Michel, 1996). This circular code X is a set of 20 trinucleotides allowing the reading frames in genes to be retrieved locally, i.e. anywhere in genes and in particular without start codons. This reading frame retrieval needs a window length l of 12 nucleotides (l ≥ 12). With a window length strictly less than 12 nucleotides (l < 12), some words of X, called ambiguous words, are found in the shifted frames (the reading frame shifted by one or two nucleotides), preventing the reading frame in genes from being retrieved. Since 1996, these ambiguous words of X had never been studied. In the first part of this paper, we identify all the ambiguous words of the common trinucleotide circular code X. With a length l varying from 1 to 11 nucleotides, the type and the occurrence number (multiplicity) of ambiguous words of X are given in each shifted frame. Maximal ambiguous words of X, words which are not factors of other ambiguous words, are also determined. Two probability definitions based on these results show that the common trinucleotide circular code X retrieves the reading frame in genes with a probability of about 90% with a window length of 6 nucleotides, and a probability of 99.9% with a window length of 9 nucleotides (100% with a window length of 12 nucleotides, by definition of a circular code). In the second part of this paper, we identify X circular code motifs (shortly X motifs) in transfer RNA and 16S ribosomal RNA: a tRNA X motif of 26 nucleotides including the anticodon stem-loop and seven 16S rRNA X motifs of length greater than or equal to 15 nucleotides. Window lengths of reading frame retrieval with each trinucleotide of these X motifs are also determined. Thanks to the crystal structure 3I8G (Jenner et al., 2010), a 3D visualization of X motifs in the ribosome shows several spatial configurations involving mRNA X motifs, A-tRNA and E-tRNA X
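
    Reading-frame retrieval with a trinucleotide code amounts to checking, for each of the three possible frame shifts of a window, whether every complete trinucleotide belongs to the code. The sketch below implements that membership test; the set CODE_X in it is a small illustrative placeholder, not the published 20-trinucleotide circular code X.

    ```python
    # Placeholder code-word set for illustration only; substitute the real set of
    # 20 trinucleotides of the common circular code X to reproduce the window-length
    # behaviour described in the abstract.
    CODE_X = {"AAC", "AAT", "GAG", "GAT", "GCC", "CTC", "GTT", "TTC"}

    def frames_in_code(window, code):
        """Return the frame shifts (0, 1, 2) for which every complete trinucleotide
        of the window belongs to the code. A unique answer retrieves the frame."""
        hits = []
        for shift in range(3):
            triplets = [window[i:i + 3] for i in range(shift, len(window) - 2, 3)]
            if triplets and all(t in code for t in triplets):
                hits.append(shift)
        return hits

    window = "AACGATGCCCTC"                  # 12 nucleotides, written in frame 0
    print(frames_in_code(window, CODE_X))    # only frame 0 is consistent here
    ```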

  11. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  12. An Improved Canine Genome and a Comprehensive Catalogue of Coding Genes and Non-Coding Transcripts

    PubMed Central

    Hoeppner, Marc P.; Lundquist, Andrew; Pirun, Mono; Meadows, Jennifer R. S.; Zamani, Neda; Johnson, Jeremy; Sundström, Görel; Cook, April; FitzGerald, Michael G.; Swofford, Ross; Mauceli, Evan; Moghadam, Behrooz Torabi; Greka, Anna; Alföldi, Jessica; Abouelleil, Amr; Aftuck, Lynne; Bessette, Daniel; Berlin, Aaron; Brown, Adam; Gearin, Gary; Lui, Annie; Macdonald, J. Pendexter; Priest, Margaret; Shea, Terrance; Turner-Maier, Jason; Zimmer, Andrew; Lander, Eric S.; di Palma, Federica

    2014-01-01

    The domestic dog, Canis familiaris, is a well-established model system for mapping trait and disease loci. While the original draft sequence was of good quality, gaps were abundant particularly in promoter regions of the genome, negatively impacting the annotation and study of candidate genes. Here, we present an improved genome build, canFam3.1, which includes 85 MB of novel sequence and now covers 99.8% of the euchromatic portion of the genome. We also present multiple RNA-Sequencing data sets from 10 different canine tissues to catalog ∼175,000 expressed loci. While about 90% of the coding genes previously annotated by EnsEMBL have measurable expression in at least one sample, the number of transcript isoforms detected by our data expands the EnsEMBL annotations by a factor of four. Syntenic comparison with the human genome revealed an additional ∼3,000 loci that are characterized as protein coding in human and were also expressed in the dog, suggesting that those were previously not annotated in the EnsEMBL canine gene set. In addition to ∼20,700 high-confidence protein coding loci, we found ∼4,600 antisense transcripts overlapping exons of protein coding genes, ∼7,200 intergenic multi-exon transcripts without coding potential, likely candidates for long intergenic non-coding RNAs (lincRNAs), and ∼11,000 transcripts that were reported by two different library construction methods but did not fit any of the above categories. Of the lincRNAs, about 6,000 have no annotated orthologs in human or mouse. Functional analysis of two novel transcripts with shRNA in a mouse kidney cell line altered cell morphology and motility. All in all, we provide a much-improved annotation of the canine genome and suggest regulatory functions for several of the novel non-coding transcripts. PMID:24625832

  13. A code generation framework for the ALMA common software

    NASA Astrophysics Data System (ADS)

    Troncoso, Nicolás; von Brand, Horst H.; Ibsen, Jorge; Mora, Matias; Gonzalez, Victor; Chiozzi, Gianluca; Jeram, Bogdan; Sommer, Heiko; Zamora, Gabriel; Tejeda, Alexis

    2010-07-01

    Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model Driven code generation to transform directly a UML Model into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, a lower handcrafted LOC reduces the error rate. Additional benefits achieved by model driven code generation are: software reuse, implicit application of design patterns and automatic tests generation. A model driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets. The generation framework presented in this paper uses openArchitectureWare as the model to text translator. OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process and aiding the developers by keeping a clean logical separation between the two. Our first results show that code generation dramatically improves code productivity.

  14. A code inspection process for security reviews

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2010-04-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  15. A code inspection process for security reviews

    SciTech Connect

    Garzoglio, Gabriele; /Fermilab

    2009-05-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  16. A concatenated coded modulation scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, Tadao; Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is presented. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high speed satellite communication for large file transfer where high reliability is required. Also presented is a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft decision Viterbi decoding algorithm.

  17. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  18. Python interface generator for Fortran based codes (a code development aid)

    Energy Science and Technology Software Center (ESTSC)

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  19. A-to-I editing of coding and non-coding RNAs by ADARs

    PubMed Central

    Nishikura, Kazuko

    2016-01-01

    Adenosine deaminases acting on RNA (ADARs) convert adenosine to inosine in double-stranded RNA. This A-to-I editing occurs not only in protein-coding regions of mRNAs, but also frequently in non-coding regions that contain inverted Alu repeats. Editing of coding sequences can result in the expression of functionally altered proteins that are not encoded in the genome, whereas the significance of Alu editing remains largely unknown. Certain microRNA (miRNA) precursors are also edited, leading to reduced expression or altered function of mature miRNAs. Conversely, recent studies indicate that ADAR1 forms a complex with Dicer to promote miRNA processing, revealing a new function of ADAR1 in the regulation of RNA interference. PMID:26648264

  20. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  1. RAYS: a geometrical optics code for EBT

    SciTech Connect

    Batchelor, D.B.; Goldfinger, R.C.

    1982-04-01

    The theory, structure, and operation of the code are described. Mathematical details of equilibrium subroutines for slab, bumpy torus, and tokamak plasma geometry are presented. Wave dispersion and absorption subroutines are presented for frequencies ranging from ion cyclotron frequency to electron cyclotron frequency. Graphics postprocessors for RAYS output data are also described.

  2. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combination continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.

  3. Arithmetic coding as a non-linear dynamical system

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Vaidya, Prabhakar G.; Bhat, Kishor G.

    2009-04-01

    In order to perform source coding (data compression), we treat messages emitted by independent and identically distributed sources as imprecise measurements (symbolic sequence) of a chaotic, ergodic, Lebesgue measure preserving, non-linear dynamical system known as the Generalized Lüroth Series (GLS). GLS achieves Shannon's entropy bound and turns out to be a generalization of arithmetic coding, a popular source coding algorithm used in international compression standards such as JPEG2000 and H.264. We further generalize GLS to piecewise non-linear maps (Skewed-nGLS). We motivate the use of Skewed-nGLS as a framework for joint source coding and encryption.
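
    For readers unfamiliar with the interval-narrowing map that the abstract relates to GLS, the following Python sketch is a minimal floating-point arithmetic coder for an i.i.d. binary source. It is adequate only for short messages (float precision limits the interval width) and is not the authors' implementation.

      def arith_encode(symbols, p0):
          low, high = 0.0, 1.0
          for s in symbols:
              split = low + (high - low) * p0
              low, high = (low, split) if s == 0 else (split, high)
          return (low + high) / 2                 # any number inside the final interval identifies the message

      def arith_decode(x, p0, n):
          low, high, out = 0.0, 1.0, []
          for _ in range(n):
              split = low + (high - low) * p0
              if x < split:
                  out.append(0); high = split
              else:
                  out.append(1); low = split
          return out

      msg = [0, 1, 1, 0, 0, 0, 1, 0, 0, 0]
      code = arith_encode(msg, p0=0.7)
      assert arith_decode(code, p0=0.7, n=len(msg)) == msg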

  4. PLASIM: A computer code for simulating charge exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.

    1982-01-01

    The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  5. A low complexity prioritized bit-plane coding for SNR scalability in MPEG-21 scalable video coding

    NASA Astrophysics Data System (ADS)

    Peng, Wen-Hsiao; Chiang, Tihao; Hang, Hsueh-Ming

    2005-07-01

    In this paper, we propose a low complexity prioritized bit-plane coding scheme to improve the rate-distortion performance of cyclical block coding in MPEG-21 scalable video coding. Specifically, we use a block priority assignment algorithm to transmit first the symbols and blocks with potentially better rate-distortion performance. Different blocks are allowed to be coded unequally in a coding cycle. To avoid transmitting priority overhead, the encoder and the decoder refer to the same context to assign priority. Furthermore, to reduce the complexity, the priority assignment is done by a look-up table and the coding of each block is controlled by a simple threshold comparison mechanism. Experimental results show that our prioritized bit-plane coding scheme can offer up to 0.5 dB PSNR improvement over the cyclical block coding described in the joint scalable verification model (JSVM).
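
    The following Python sketch shows plain bit-plane coding, the mechanism that the prioritization above reorders; it is a generic illustration, not the JSVM cyclical block coder. Most significant planes are sent first, so truncating the stream yields a coarser, SNR-scalable reconstruction.

      coeffs = [13, 2, 7, 0, 9]               # toy quantized transform coefficients
      nplanes = max(coeffs).bit_length()

      # Bit-planes from most significant to least significant.
      planes = [[(c >> p) & 1 for c in coeffs] for p in range(nplanes - 1, -1, -1)]

      # Reconstruct using only the top two planes (the remaining planes are taken as zero).
      kept = planes[:2]
      recon = [sum(bit << (nplanes - 1 - i) for i, bit in enumerate(col)) for col in zip(*kept)]
      print(recon)                            # [12, 0, 4, 0, 8] -- a coarse approximation of coeffs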

  6. CALMAR: A New Versatile Code Library for Adjustment from Measurements

    NASA Astrophysics Data System (ADS)

    Grégoire, G.; Fausser, C.; Destouches, C.; Thiollay, N.

    2016-02-01

    CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress. These cases will be included in the code package that will be available to the community. Future developments are discussed. The code should support the new Generalized Nuclear Data (GND) format. This new format has many advantages compared to ENDF.

  7. Source Term Code Package: a user's guide (Mod 1)

    SciTech Connect

    Gieseke, J.A.; Cybulskis, P.; Jordan, H.; Lee, K.W.; Schumacher, P.M.; Curtis, L.A.; Wooton, R.O.; Quayle, S.F.; Kogan, V.

    1986-07-01

    As part of a major reassessment of the release of radioactive materials to the environment (source terms) in severe reactor accidents, a group of state-of-the-art computer codes was utilized to perform extensive analyses. A major product of this source term reassessment effort was a demonstrated methodology for analyzing specific accident situations to provide source term predictions. The computer codes forming this methodology have been upgraded and modified for release and further use. This system of codes has been named the Source Term Code Package (STCP) and is the subject of this user's guide. The guide is intended to provide an understanding of the STCP structure and to facilitate STCP use. The STCP was prepared for operation on a CDC system but is written in FORTRAN-77 to permit transportability. In the current version (Mod 1) of the STCP, the various calculational elements fall into four major categories represented by the codes MARCH3, TRAP-MELT3, VANESA, and NAUA/SPARC/ICEDF. The MARCH3 code is a combination of the MARCH2, CORSOR-M, and CORCON-Mod 2 codes. The TRAP-MELT3 code is a combination of the TRAP-MELT2.0 and MERGE codes.

  8. RESRAD-CHEM: A computer code for chemical risk assessment

    SciTech Connect

    Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.

    1993-10-01

    RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency's (EPA's) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.

  9. Documentation for RISKIN: A risk integration code for MACCS (MELCOR Accident Consequence Code System) output

    SciTech Connect

    Rollstin, J.A.; Hong, Kou-John

    1990-11-01

    This document has been prepared as a user's guide for the computer program RISKIN developed at Sandia National Laboratories. The RISKIN code generates integrated risk tables and the weighted mean risk associated with a user-selected set of consequences from up to five output files generated by the MELCOR Accident Consequence Code System (MACCS). Each MACCS output file can summarize the health and economic consequences resulting from up to 60 distinct severe accident source terms. Since the accident frequency associated with these source terms is not included as a MACCS input parameter, a postprocessor is required to derive results that incorporate accident frequency. The RISKIN code is such a postprocessor. RISKIN will search the MACCS output files for the mean and peak consequence values and the complementary cumulative distribution function (CCDF) tables for each requested consequence. Once obtained, RISKIN combines this data with accident frequency data to produce frequency weighted results. A postprocessor provides RISKIN with an interface to the proprietary DISSPLA plot package. The RISKIN code has been written using ANSI Standard FORTRAN 77 to maximize its portability.
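
    The frequency weighting described above amounts to multiplying each source term's mean consequence by its accident frequency and summing, as in the short Python sketch below. The numbers are illustrative placeholders, not MACCS output.

      frequencies  = [1e-5, 4e-6, 2e-7]       # accident frequency per reactor-year (placeholder values)
      consequences = [3.0, 12.0, 150.0]       # mean consequence per occurrence (placeholder values)

      risk = sum(f * c for f, c in zip(frequencies, consequences))
      print(risk)                             # frequency-weighted mean risk per reactor-year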

  10. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  11. Bar-Code System for a Microbiological Laboratory

    NASA Technical Reports Server (NTRS)

    Law, Jennifer; Kirschner, Larry

    2007-01-01

    A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.

  12. The Creation and Implementation of a Student Civility Code

    ERIC Educational Resources Information Center

    Lucas, John J.; Rolden-Scheib, Gloria

    2006-01-01

    This paper examines the design and implementation of a student civility code at a regional campus of a Big Ten University. The paper also provides some guidelines to address student incivility in both the classroom and service offices throughout a higher education institution. The communication of such a student code to promote civility was…

  13. Ethical codes for attorneys: a brief introduction.

    PubMed

    Zarkowski, P

    1997-01-01

    Ethical standards for lawyers are contained in the Model Rules of Professional Conduct (which lays out both "shall/shall not" rules and "may" suggestions in nine broad areas) and the Model Code of Professional Responsibility (which covers essentially the same topic areas but offers more detailed commentary). Topics included in the Rules are the client-lawyer relationship, the attorney's role as an advocate and counselor, law firms and associations, public service, transactions with individuals other than clients and information about legal services including advertising, firm names, and letterhead. The American Dental Association's Principles of Ethics and Code of Professional Conduct is organized around the five ethical principles of patient autonomy, nonmaleficence, beneficence, justice, and veracity. There are substantial similarities in intent between the ethical standards of dentists and lawyers; there are also differences. PMID:9270220

  14. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-11-01

    As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3 nu = 6 (with 2 (exp nu) = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme that has as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16 state rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.

  15. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-01-01

    As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3 nu = 6 (with 2 (exp nu) = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme that has as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16 state rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
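
    The throughput figures quoted in these abstracts follow directly from the bandwidth efficiencies, as the short Python check below shows. The SONET symbol-rate figure is derived here and is not stated in the abstracts.

      symbol_rate = 3e6                  # HST modulator limit, 3 Msym/s
      k_bpsk_r13 = 1 / 3                 # rate-1/3 convolutional code on BPSK, bit/sym
      k_trellis = 2.5                    # rate-5/6 4D-8PSK trellis code, bit/sym

      print(symbol_rate * k_bpsk_r13)    # 1.0e6 bit/s: the 1 Mbit/s quoted for the current HST scheme
      print(symbol_rate * k_trellis)     # 7.5e6 bit/s: same symbol rate, 7.5 times the data rate
      print(155.52e6 / k_trellis / 1e6)  # about 62.2 Msym/s for SONET, within a 72 MHz transponder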

  16. CESAR: A Code for Nuclear Fuel and Waste Characterisation

    SciTech Connect

    Vidal, J.M.; Grouiller, J.P.; Launay, A.; Berthion, Y.; Marc, A.; Toubon, H.

    2006-07-01

    CESAR (Simplified Evolution Code Applied to Reprocessing) is a depletion code developed through a joint program between CEA and COGEMA. In the late 1980's, the first use of this code dealt with nuclear measurement at the Laboratories of the La Hague reprocessing plant. The use of CESAR was then extended to characterizations of all entrance materials and for characterisation, via tracer, of all produced waste. The code can distinguish more than 100 heavy nuclides, 200 fission products and 100 activation products, and it can characterise both the fuel and the structural material of the fuel. CESAR can also make depletion calculations from 3 months to 1 million years of cooling time. Between 2003 and 2005, the fifth version of the code was developed. The modifications were related to the harmonisation of the code's nuclear data with the JEF2.2 nuclear data file. This paper describes the code and explains the extensive use of this code at the La Hague reprocessing plant and also for prospective studies. The second part focuses on the modifications of the latest version, and describes the application field and the qualification of the code. Many companies and the IAEA use CESAR today. CESAR offers a Graphical User Interface, which is very user-friendly. (authors)

  17. Eighteen rules for writing a code of professional ethics.

    PubMed

    Davis, Michael

    2007-06-01

    Most professional societies, scientific associations, and the like that undertake to write a code of ethics do so using other codes as models but without much (practical) guidance about how to do the work. The existing literature on codes is much more concerned with content than procedure. This paper adds to guidance already in the literature what I learned from participating in the writing of an important code of ethics. The guidance is given in the form of "rules" each of which is explained and (insofar as possible) justified. The emphasis is on procedure. PMID:17717731

  18. A New Detailed Term Accounting Opacity Code: TOPAZ

    SciTech Connect

    Iglesias, C A; Chen, M H; Isaacs, W; Sonnad, V; Wilson, B G

    2004-04-28

    A new opacity code, TOPAZ, which explicitly includes configuration term structure in the bound-bound transitions is being developed. The goal is to extend the current capabilities of detailed term accounting opacity codes such as OPAL that are limited to lighter elements of astrophysical interest. At present, opacity calculations of heavier elements use statistical methods that rely on the presence of myriad spectral lines for accuracy. However, statistical approaches have been shown to be inadequate for astrophysical opacity calculations. An application of the TOPAZ code will be to study the limits of statistical methods. Comparisons of TOPAZ to other opacity codes as well as experiments are presented.

  19. IGB grid: User's manual (A turbomachinery grid generation code)

    NASA Technical Reports Server (NTRS)

    Beach, T. A.; Hoffman, G.

    1992-01-01

    A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.

  20. In search of a 2-dB coding gain

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.; Vo, Q. D.

    1985-01-01

    A recent code search found a (15,1/5), a (14,1/6), and a (15,1/6) convolutional code which, when concatenated with a 10-bit (1023,959) Reed-Solomon (RS) code, achieves a bit-error rate (BER) of 0.000001 at a bit signal-to-noise ratio (SNR) of 0.50 dB, 0.47 dB and 0.42 dB, respectively. All of these three codes outperform the Voyager communication system, our baseline, which achieves a BER of 0.000001 at a bit SNR of 2.53 dB, by more than 2 dB. The 2 dB coding improvement goal was exceeded.

  1. Coded source neutron imaging with a MURA mask

    NASA Astrophysics Data System (ADS)

    Zou, Y. B.; Schillinger, B.; Wang, S.; Zhang, X. S.; Guo, Z. Y.; Lu, Y. R.

    2011-09-01

    In coded source neutron imaging the single aperture commonly used in neutron radiography is replaced with a coded mask. Using a coded source can improve the neutron flux at the sample plane when a very high L/D ratio is needed. The coded source imaging is a possible way to reduce the exposure time to get a neutron image with very high L/D ratio. A 17×17 modified uniformly redundant array coded source was tested in this work. There are 144 holes of 0.8 mm diameter on the coded source. The neutron flux from the coded source is as high as from a single 9.6 mm aperture, while its effective L/D is the same as in the case of a 0.8 mm aperture. The Richardson-Lucy maximum likelihood algorithm was used for image reconstruction. Compared to an in-line phase contrast neutron image taken with a 1 mm aperture, it takes much less time for the coded source to get an image of similar quality.
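
    As a sketch of the reconstruction step, the following Python code runs a generic Richardson-Lucy iteration on a simulated coded-aperture exposure. The mask, object, and noise level are illustrative placeholders, and the forward model (plain convolution with the mask pattern) is a simplification of the real system.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, iterations=50):
          estimate = np.full_like(observed, observed.mean())
          psf_mirror = psf[::-1, ::-1]
          for _ in range(iterations):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      rng = np.random.default_rng(0)
      mask = (rng.random((17, 17)) < 0.5).astype(float)            # stand-in for the 17x17 MURA pattern
      obj = np.zeros((64, 64)); obj[20, 30] = 1.0                  # point-like object
      detector = fftconvolve(obj, mask, mode="same") + 0.01 * rng.random((64, 64))
      recon = richardson_lucy(detector, mask)                      # the peak of recon should sit near (20, 30)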

  2. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  3. A Program Evaluation of Classroom Data Collection with Bar Codes.

    ERIC Educational Resources Information Center

    Saunders, Muriel D.; And Others

    1993-01-01

    A special education record-keeping system using bar code symbols and optical scanners is described. Bar code symbols were created for each Individualized Educational Plan objective, and symbols are scanned when students emit targeted behaviors. A weekly printed report of student performance is produced. Advantages, disadvantages, and costs are…

  4. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....C. 552(a) and 1 CFR part 51. Copies of the ASME Boiler and Pressure Vessel Code, the ASME Code for....gov/federal-register/cfr/ibr-locations.html. (1) As used in this section, references to Section III... accordance with 10 CFR part 50, Appendix J, Option A or Option B on which the applicant's or...

  5. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....C. 552(a) and 1 CFR part 51. Copies of the ASME Boiler and Pressure Vessel Code, the ASME Code for....gov/federal-register/cfr/ibr-locations.html. (1) As used in this section, references to Section III... accordance with 10 CFR part 50, Appendix J, Option A or Option B on which the applicant's or...

  6. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....C. 552(a) and 1 CFR part 51. Copies of the ASME Boiler and Pressure Vessel Code, the ASME Code for.../federal-register/cfr/ibr-locations.html. (1) As used in this section, references to Section III refer to... accordance with 10 CFR part 50, Appendix J, Option A or Option B on which the applicant's or...

  7. Rationale for Student Dress Codes: A Review of School Handbooks

    ERIC Educational Resources Information Center

    Freeburg, Elizabeth W.; Workman, Jane E.; Lentz-Hees, Elizabeth S.

    2004-01-01

    Through dress codes, schools establish rules governing student appearance. This study examined stated rationales for dress and appearance codes in secondary school handbooks; 182 handbooks were received. Of 150 handbooks containing a rationale, 117 related dress and appearance regulations to students' right to a non-disruptive educational…

  8. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.

  9. Code-Mixing as a Bilingual Instructional Strategy

    ERIC Educational Resources Information Center

    Jiang, Yih-Lin Belinda; García, Georgia Earnest; Willis, Arlette Ingram

    2014-01-01

    This study investigated code-mixing practices, specifically the use of L2 (English) in an L1 (Chinese) class in a U.S. bilingual program. Our findings indicate that the code-mixing practices made and prompted by the teacher served five pedagogical functions: (a) to enhance students' bilingualism and bilingual learning, (b) to review and…

  10. The RCVS codes of conduct: what's in a word?

    PubMed

    McCulloch, Steven; Reiss, Michael; Jinman, Peter; Wathes, Christopher

    2014-01-18

    In 2012, the RCVS introduced a new Code of Professional Conduct for Veterinary Surgeons, replacing the Guide to Professional Conduct which had existed until then. Is a common Code relevant for the veterinarian's many roles? There's more to think about here than just the change of name, write Steven McCulloch, Michael Reiss, Peter Jinman and Christopher Wathes. PMID:24443467

  11. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  12. A novel super-FEC code based on concatenated code for high-speed long-haul optical communication systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jianguo; Ye, Wenwei; Jiang, Ze; Mao, Youju; Wang, Wei

    2007-05-01

    The structures of the novel super forward error correction (Super-FEC) code type based on the concatenated code for high-speed long-haul optical communication systems are studied in this paper. The Reed-Solomon (RS) (255, 239) + Bose-Chaudhuri-Hocquenghem (BCH) (1023, 963) concatenated code is presented after the characteristics of the concatenated code and the two Super-FEC code types presented in ITU-T G.975.1 have been theoretically analyzed. The simulation results show that this novel code type, compared with the RS (255, 239) + convolutional-self-orthogonal-code (CSOC) (k0/n0 = 6/7, J = 8) code in ITU-T G.975.1, has a lower redundancy and better error-correction capabilities, and its net coding gain (NCG) at the third iteration is 0.57 dB more than that of the RS (255, 239) + CSOC (k0/n0 = 6/7, J = 8) code in ITU-T G.975.1 at the same iteration, for a bit error rate (BER) of 10^-12. Therefore, the novel code type can better be used in long-haul, larger capacity and higher bit-rate optical communication systems. Furthermore, the design and implementation of the novel concatenated code type are also discussed.

  13. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base to developers. This paper contributes a unified code navigation theory in light of the optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developer's behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools. PMID:25910273

  14. Toward a Code of Conduct for the Presidency

    ERIC Educational Resources Information Center

    Fleming, J. Christopher

    2012-01-01

    A presidential code of conduct is needed more today than ever before. College and university presidents are being required to do more without the proper training to succeed. Presidents from outside the academy enter academia with normative patterns and codes of conduct that served them well in their previous occupations but now have the potential…

  15. Coding as a Trojan Horse for Mathematics Education Reform

    ERIC Educational Resources Information Center

    Gadanidis, George

    2015-01-01

    The history of mathematics educational reform is replete with innovations taken up enthusiastically by early adopters without significant transfer to other classrooms. This paper explores the coupling of coding and mathematics education to create the possibility that coding may serve as a Trojan Horse for mathematics education reform. That is,…

  16. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC). In this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is piecewise, filled with sharp contours and smooth interior regions. Contours in a depth image are more important than the interior in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into the contour and the interior. Two different procedures are employed to code contours and interiors, respectively. A vector-based strategy is applied to code the contour lines. Straight lines in contours cost few bits since they are regarded as vectors. Pixels that are not on straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula. Coefficients in the formula are obtained by regression. This process is called interior painting. Unlike conventional block based coding methods, the residue between the original frame and the reconstructed frame (from contour rebuilding and interior painting) is not sent to the decoder. In this proposal, the contour is coded in a lossless way whereas the interior is coded in a lossy way. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high quality scenarios.

  17. A novel unified coding analytical method for Internet of Things

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Zhang, JianHong

    2013-08-01

    This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out the 'displacement goods' and 'physical objects', and expounds the relationship between them. It details the item coding principles, establishes a one-to-one relationship between three-dimensional spatial coordinates of points and global manufacturers, can expand infinitely, solves the problem of unified coding in the production phase and circulation phase with a novel unified coding method, and further explains how to update the item information corresponding to the coding in the stages of sale and use, so as to meet the requirement that the Internet of Things carry out real-time monitoring and intelligent management of each item.

  18. Error-correcting code on a cactus: A solvable model

    NASA Astrophysics Data System (ADS)

    Vicente, R.; Saad, D.; Kabashima, Y.

    2000-09-01

    An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.

  19. Coded aperture imaging with a HURA coded aperture and a discrete pixel detector

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    An investigation into the gamma ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma ray astronomical telescopes in terms of the provision of high quality sky images in conjunction with an imager plane which has the capacity to reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented and the imaging quality quantified in terms of the signal to noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce, as close as possible, the experimental conditions. The 60 degree antisymmetry property of HURA's was also employed to remove noise due to detector systematic effects present in the experimental images, and rendered a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for the reduction of the coding error noise associated with the gamma ray images.

  20. Selective video encryption of a distributed coded bitstream using LDPC codes

    NASA Astrophysics Data System (ADS)

    Um, Hwayoung; Delp, Edward J.

    2006-02-01

    Selective encryption is a technique that is used to minimize computational complexity or enable system functionality by only encrypting a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we need to rely not only on the beneficial effects of redundancy reduction, but also on the characteristics of the compression algorithm to concentrate important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to consider the choices of which types of bits are most effective for selective encryption of a video sequence that has been compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we can achieve a reduction in system complexity.

  1. A Simple Cooperative Relaying with Alamouti Coded Transmission

    NASA Astrophysics Data System (ADS)

    Yamaoka, Tomoya; Hara, Yoshitaka; Fukui, Noriyuki; Kubo, Hiroshi; Yamazato, Takaya

    Cooperative diversity using space-time codes offers effective space diversity with low complexity, but the scheme needs the space-time coding process in the relay nodes. We propose a simple cooperative relay scheme that uses space-time coding. In the scheme, the source node transmits the Alamouti coded signal sequences and the sink node receives the signal sequence via the two coordinated relay nodes. At the relay nodes, the operation procedure is just permutation and forwarding of the signal sequence. In the proposed scheme, none of the relay nodes needs quadrature detection or space-time coding, and the simple relay process offers effective space diversity. Simulations confirm the effectiveness of the proposed relay process.
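
    For reference, the following noise-free Python sketch shows the Alamouti block code named above and the linear combining that separates the two symbols at the sink. It illustrates the space-time code itself, not the relaying protocol proposed in the paper; the channel gains and symbols are placeholder values.

      import numpy as np

      def alamouti_encode(s1, s2):
          # rows = time slots, columns = the two transmit branches
          return np.array([[s1, s2],
                           [-np.conj(s2), np.conj(s1)]])

      h = np.array([0.8 - 0.3j, 0.4 + 0.9j])   # flat-fading gains of the two branches (placeholder values)
      s1, s2 = 1 + 1j, -1 + 1j                 # two QPSK symbols
      r = alamouti_encode(s1, s2) @ h          # received samples in slots 1 and 2 (noise omitted)

      g = abs(h[0]) ** 2 + abs(h[1]) ** 2
      s1_hat = (np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])) / g
      s2_hat = (np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])) / g
      assert np.allclose([s1_hat, s2_hat], [s1, s2])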

  2. A new two dimensional spectral/spatial multi-diagonal code for noncoherent optical code division multiple access (OCDMA) systems

    NASA Astrophysics Data System (ADS)

    Kadhim, Rasim Azeez; Fadhil, Hilal Adnan; Aljunid, S. A.; Razalli, Mohamad Shahrazel

    2014-10-01

    A new two dimensional code family, namely two dimensional multi-diagonal (2D-MD) codes, is proposed for spectral/spatial non-coherent OCDMA systems based on the one dimensional MD code. Since the MD code has the property of zero cross correlation, the proposed 2D-MD code also has this property. As a result, the multi-access interference (MAI) is fully eliminated and the phase induced intensity noise (PIIN) is suppressed with the proposed code. Code performance is analyzed in terms of bit error rate (BER) while considering the effect of shot noise, PIIN, and thermal noise. The performance of the proposed code is compared with the related MD, modified quadratic congruence (MQC), two dimensional perfect difference (2D-PD) and two dimensional diluted perfect difference (2D-DPD) codes. The analytical and the simulation results reveal that the proposed 2D-MD code outperforms the other codes. Moreover, a large number of simultaneous users can be accommodated at low BER and high data rate.

  3. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  4. A New AMR Code for Relativistic Magnetohydrodynamics in Dynamical Spacetimes: Numerical Method and Code Validation

    NASA Astrophysics Data System (ADS)

    Liu, Yuk Tung; Etienne, Zachariah; Shapiro, Stuart

    2011-04-01

    The Illinois relativity group has written and tested a new GRMHD code, which is compatible with adaptive-mesh refinement (AMR) provided by the widely-used Cactus/Carpet infrastructure. Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the BSSN formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential. The divergenceless constraint div(B) = 0 is enforced by computing B as the curl of the vector potential. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We will present the numerical method and code validation tests for both Minkowski and curved spacetimes. The tests include magnetized shocks, nonlinear Alfven waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.

  5. Electric utility value determination for wind energy. Volume II. A user's guide. [WTP code; WEIBUL code; ROSEN code; ULMOD code; FINAM code

    SciTech Connect

    Percival, David; Harper, James

    1981-02-01

    This report describes a method for determining the value of wind energy systems to electric utilities. It is performed by a package of computer models available from SERI that can be used with most utility planning models. The final output of these models gives a financial value ($/kW) of the wind energy system under consideration in the specific utility system. This volume, the second of two volumes, is a user's guide for the computer programs available from SERI. The first volume describes the value determination methodology and gives detailed discussion on each step of the computer modeling.

  6. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  7. A Two-Dimensional Compressible Gas Flow Code

    Energy Science and Technology Software Center (ESTSC)

    1995-03-17

    F2D is a general purpose, two dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equations is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code to problems ranging from free fluid flow to shock tubes and flow in heated porous media.

  8. Implementation of a Blowing Boundary Condition in the LAURA Code

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Gnoffo, Peter A.

    2008-01-01

    Preliminary steps toward modeling a coupled ablation problem using a finite-volume Navier-Stokes code (LAURA) are presented in this paper. Implementation of a surface boundary condition with mass transfer (blowing) is described followed by verification and validation through comparisons with analytic results and experimental data. Application of the code to a carbon-nosetip ablation problem is demonstrated and the results are compared with previously published data. It is concluded that the code and coupled procedure are suitable to support further ablation analyses and studies.

  9. The Numerical Electromagnetics Code (NEC) - A Brief History

    SciTech Connect

    Burke, G J; Miller, E K; Poggio, A J

    2004-01-20

    The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for numerous copies (perhaps 1000s) that were acquired through other means capitalizing on the open source code, the absence of distribution controls prior to NEC3 and the availability of versions on the Internet. In this paper we briefly review the history of the code that is concisely displayed in Figure 1. We will show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately led to NEC and how it evolved to the present day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code both of which can be found in the code documents.

  10. SEQassembly: A Practical Tools Program for Coding Sequences Splicing

    NASA Astrophysics Data System (ADS)

    Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming

    A CDS (coding sequence) is the portion of an mRNA sequence that is composed of a number of exon sequence segments. The construction of the CDS sequence is important for in-depth genetic analyses such as genotyping. A program in the MATLAB environment is presented which can process batches of sample sequences into code segments under the guidance of reference exon models, and splice the code segments from the same sample source into a CDS according to the exon order in a queue file. This program is useful in transcriptional polymorphism detection and gene function studies.

  11. A Combinatorial Geometry Code System with Model Testing Routines.

    Energy Science and Technology Software Center (ESTSC)

    1982-10-08

    GIFT, the Geometric Information For Targets code system, is used to mathematically describe the geometry of a three-dimensional vehicle such as a tank, truck, or helicopter. The geometric data generated are merged in vulnerability computer codes with the energy-effects data of a selected munition to simulate the probabilities of malfunction or destruction of components when the vehicle is attacked by the selected munition. GIFT options include those which graphically display the vehicle, those which check the correctness of the geometry data, those which compute physical characteristics of the vehicle, and those which generate the geometry data used by vulnerability codes.

  12. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)

    1990-01-01

    In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  13. ALEPH2 - A general purpose Monte Carlo depletion code

    SciTech Connect

    Stankovskiy, A.; Van Den Eynde, G.; Baeten, P.; Trakas, C.; Demy, P. M.; Villatte, L.

    2012-07-01

    The Monte Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. A previous version of the code implemented the coupling between the Monte Carlo transport code (any version of MCNP or MCNPX) and the deterministic depletion code ORIGEN-2.2, but had important deficiencies in nuclear data treatment and limitations inherent to ORIGEN-2.2. A new version of the code, ALEPH2, has several unique features making it outstanding among other depletion codes. The most important feature is full data consistency between the steady-state Monte Carlo and time-dependent depletion calculations. The latest-generation general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII and JENDL-4) are fully implemented, including special-purpose activation, spontaneous fission, fission product yield and radioactive decay data. The built-in depletion algorithm eliminates the uncertainties associated with obtaining the time-dependent nuclide concentrations. A predictor-corrector mechanism, calculation of nuclear heating, calculation of decay heat, and decay neutron sources are available as well. The code has been validated against the results of the REBUS experimental program; ALEPH2 has shown better agreement with the measured data than other depletion codes. (authors)

  14. A preprocessor for FORTRAN source code produced by reduce

    NASA Astrophysics Data System (ADS)

    Kaneko, Toshiaki; Kawabata, Setsuya

    1989-09-01

    For estimating total cross sections and various spectra for complicated processes in high energy physics, the most time-consuming part is numerical integration over the phase volume. When a FORTRAN source code for the integrand is produced by REDUCE, it is often not only too long but also insufficiently reduced to be optimized by a FORTRAN compiler. A program package called SPROC has been developed to convert FORTRAN source code to a more optimized form and to divide the code into subroutines whose lengths are short enough for FORTRAN compilers. It can also generate vectorizable code, which can achieve high efficiency on vector computers. The output is given in a form suitable for the numerical integration package BASES and its vector-computer version VBASES. By this improvement the CPU time for integration is shortened by a factor of about two on a scalar computer and by several times on a vector computer.

  15. A Coding System for Analysing a Spoken Text Database.

    ERIC Educational Resources Information Center

    Cutting, Joan

    1994-01-01

    This paper describes a coding system devised to analyze conversations of graduate students in applied linguistics at Edinburgh University. The system was devised to test the hypothesis that as shared knowledge among conversation participants grows, the textual density of in-group members has more cues than that of strangers. The informal…

  16. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  17. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  18. Shot level parallelization of a seismic inversion code using PVM

    SciTech Connect

    Versteeg, R.J.; Gockenback, M.; Symes, W.W.; Kern, M.

    1994-12-31

    This paper presents experience with parallelization, using PVM, of DSO, a seismic inversion code developed in The Rice Inversion Project. It focuses on one aspect: trying to run efficiently on a cluster of 4 workstations. The authors use a coarse-grain parallelism in which they dynamically distribute the shots over the available machines in the cluster. The modeling and migration of their code are parallelized very effectively by this strategy; they have reached an overall performance of 104 Mflops using a configuration of one manager with 3 workers, a speedup of 2.4 versus the serial version, which according to Amdahl's law is optimal given the current design of their code. Further speedup is currently limited by the non-parallelized parts of their code: optimization, linear algebra, and I/O.

  19. Towards Realistic Implementations of a Majorana Surface Code.

    PubMed

    Landau, L A; Plugge, S; Sela, E; Altland, A; Albrecht, S M; Egger, R

    2016-02-01

    Surface codes have emerged as promising candidates for quantum information processing. Building on the previous idea to realize the physical qubits of such systems in terms of Majorana bound states supported by topological semiconductor nanowires, we show that the basic code operations, namely projective stabilizer measurements and qubit manipulations, can be implemented by conventional tunnel conductance probes and charge pumping via single-electron transistors, respectively. The simplicity of the access scheme suggests that a functional code might be in close experimental reach. PMID:26894694

  20. A three-dimensional magnetostatics computer code for insertion devices.

    PubMed

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented. PMID:15263552

  1. A code for calculating intrabeam scattering and beam lifetime

    SciTech Connect

    Kim, C.H.

    1997-05-01

    Beam emittances in a circular accelerator with a high beam intensity are strongly affected by small-angle intrabeam Coulomb scattering. In the computer simulation model presented here, the authors use three coupled nonlinear differential equations to describe the evolution of the emittances in the transverse and the longitudinal planes. These equations include terms which take into account intrabeam scattering, adiabatic damping, microwave instabilities, synchrotron damping, and quantum excitations. A code was generated to solve the equations numerically and incorporated into a FORTRAN code library. Routines for circular high-intensity beam physics, such as intrabeam scattering, Touschek scattering, and the bunch-lengthening effect of higher harmonic cavities, are included in the library. The code presently runs in the PC environment. A description of the code and some examples are presented.

  2. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
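    For orientation, the standard textbook form of these decoding steps for a t-error-correcting RS code is sketched below (conventions on signs and indexing vary, and the exact variant used in the I4-TENEX implementation may differ); for the (31,15) code, t = 8:

    ```latex
    \begin{aligned}
    &\text{Syndromes:} && S_j = r(\alpha^{j}), \quad j = 1,\dots,2t,\\
    &\text{Key equation:} && \Lambda(x)\,S(x) \equiv \Omega(x) \pmod{x^{2t}}
      \quad\text{(solved by Berlekamp's iteration),}\\
    &\text{Chien search:} && \Lambda(X_k^{-1}) = 0 \;\Rightarrow\; \text{error locators } X_k,\\
    &\text{Forney's method:} && e_k = -\,\frac{\Omega(X_k^{-1})}{\Lambda'(X_k^{-1})}.
    \end{aligned}
    ```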

  3. Progress towards a world-wide code of conduct

    SciTech Connect

    Lee, J.A.N.; Berleur, J.

    1994-12-31

    In this paper the work of the International Federation for Information Processing (IFIP) Task Group on Ethics is described and the recommendations presented to the General Assembly are reviewed. While a common code of ethics or conduct has not been recommended for consideration by the member societies of IFIP, a set of guidelines for the establishment and evaluation of codes has been produced, and procedures for assisting code development have been established within IFIP. This paper proposes that the data collected by the Task Group and the proposed guidelines can be used as a tool for the study of codes of practice, providing a teachable, learnable educational module in courses related to the ethics of computing and computation, and looks at the next steps in bringing ethical awareness to the IT community.

  4. POPCORN: A comparison of binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that, when the assumptions are equalized, the results are similar. The main differences arise from deviating physical input.

  5. A finite element code for electric motor design

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  6. The Nuremberg Code and the Nuremberg Trial. A reappraisal.

    PubMed

    Katz, J

    1996-11-27

    The Nuremberg Code includes 10 principles to guide physician-investigators in experiments involving human subjects. These principles, particularly the first principle on "voluntary consent," primarily were based on legal concepts because medical codes of ethics existent at the time of the Nazi atrocities did not address consent and other safeguards for human subjects. The US judges who presided over the proceedings did not intend the Code to apply only to the case before them, to be a response to the atrocities committed by the Nazi physicians, or to be inapplicable to research as it is customarily carried on in medical institutions. Instead, a careful reading of the judgment suggests that they wrote the Code for the practice of human experimentation whenever it is being conducted. PMID:8922453

  7. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  8. A parallel and modular deformable cell Car-Parrinello code

    NASA Astrophysics Data System (ADS)

    Cavazzoni, Carlo; Chiarotti, Guido L.

    1999-12-01

    We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90, and makes use of some new programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependences among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up over a wide range of numbers of processors, independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing, for a fixed problem, with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ˜50 ps.

  9. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-04-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  10. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
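    The balancing idea can be illustrated with a short sketch (a generic illustration of curve-based balancing, not the PSC implementation; the patch grid, particle counts, and function names below are hypothetical): patches are ordered along a space-filling curve, and the curve is cut into contiguous segments of roughly equal work so that each rank receives spatially compact patches with a similar particle load.

    ```python
    # Sketch: patch-based load balancing along a Morton (Z-order) space-filling curve.

    def morton_key(ix, iy, bits=16):
        """Interleave the bits of ix and iy to obtain a Z-order index."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (2 * b)
            key |= ((iy >> b) & 1) << (2 * b + 1)
        return key

    def assign_patches(patch_loads, n_ranks):
        """Cut the Morton-ordered patch list into n_ranks contiguous segments
        of approximately equal total load (here, particle count per patch)."""
        ordered = sorted(patch_loads, key=lambda p: morton_key(*p))
        total = sum(patch_loads.values())
        target = total / n_ranks
        assignment, rank, acc = {}, 0, 0.0
        for patch in ordered:
            assignment[patch] = rank
            acc += patch_loads[patch]
            # advance to the next rank once its share of the work is reached
            if acc >= target * (rank + 1) and rank < n_ranks - 1:
                rank += 1
        return assignment

    if __name__ == "__main__":
        import random
        loads = {(i, j): random.randint(100, 10000) for i in range(8) for j in range(8)}
        owners = assign_patches(loads, n_ranks=4)
        for r in range(4):
            work = sum(loads[p] for p, owner in owners.items() if owner == r)
            print(f"rank {r}: {work} particles")
    ```

    Because the Morton curve keeps nearby patches adjacent in the ordering, each rank's segment stays roughly contiguous in space, which limits the ghost-exchange surface while equalizing the work per rank.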

  11. OncodriveFML: a general framework to identify coding and non-coding regions with cancer driver mutations.

    PubMed

    Mularoni, Loris; Sabarinathan, Radhakrishnan; Deu-Pons, Jordi; Gonzalez-Perez, Abel; López-Bigas, Núria

    2016-01-01

    Distinguishing the driver mutations from somatic mutations in a tumor genome is one of the major challenges of cancer research. This challenge is more acute and far from solved for non-coding mutations. Here we present OncodriveFML, a method designed to analyze the pattern of somatic mutations across tumors in both coding and non-coding genomic regions to identify signals of positive selection, and therefore, their involvement in tumorigenesis. We describe the method and illustrate its usefulness to identify protein-coding genes, promoters, untranslated regions, intronic splice regions, and lncRNAs-containing driver mutations in several malignancies. PMID:27311963

  12. ADLIB—A simple database framework for beamline codes

    NASA Astrophysics Data System (ADS)

    Mottershead, C. Thomas

    1993-12-01

    There are many well developed codes available for beamline design and analysis. A significant fraction of each of these codes is devoted to processing its own unique input language for describing the problem. None of these large, complex, and powerful codes does everything. Adding a new bit of specialized physics can be a difficult task whose successful completion makes the code even larger and more complex. This paper describes an attempt to move in the opposite direction, toward a family of small, simple, single purpose physics and utility modules, linked by an open, portable, public domain database framework. These small specialized physics codes begin with the beamline parameters already loaded in the database, and accessible via the handful of subroutines that constitute ADLIB. Such codes are easier to write, and inherently organized in a manner suitable for incorporation in model based control system algorithms. Examples include programs for analyzing beamline misalignment sensitivities, for simulating and fitting beam steering data, and for translating among MARYLIE, TRANSPORT, and TRACE3D formats.

  13. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  14. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  15. A Comprehensive Validation Approach Using The RAVEN Code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J; Rinaldi, Ivan; Giannetti, Fabio; Caruso, Gianfranco

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. The state-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code makes it possible to address both of these shortcomings. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  16. Codes, standards, and PV power systems. A 1996 status report

    SciTech Connect

    Wiles, J

    1996-06-01

    As photovoltaic (PV) electrical power systems gain increasing acceptance for both off-grid and utility-interactive applications, the safety, durability, and performance of these systems gain in importance. Local and state jurisdictions in many areas of the country require that all electrical power systems be installed in compliance with the requirements of the National Electrical Code® (NEC®). Utilities and governmental agencies are now requiring that PV installations and components also meet a number of Institute of Electrical and Electronic Engineers (IEEE) standards. PV installers are working more closely with licensed electricians and electrical contractors who are familiar with existing local codes and installation practices. PV manufacturers, utilities, balance-of-systems manufacturers, and standards representatives have come together to address safety and code related issues for future PV installations. This paper addresses why compliance with the accepted codes and standards is needed and how it is being achieved.

  17. ICD-10 mortality coding and the NCIS: a comparative study.

    PubMed

    Daking, Leanne; Dodds, Leonie

    2007-01-01

    The collection and utilisation of mortality data are often hindered by limited access to contextual details of the circumstances surrounding fatal incidents. The National Coroners Information System (NCIS) can provide researchers with access to such information. The NCIS search capabilities have been enhanced by the inclusion of data supplied by the Australian Bureau of Statistics (ABS), specifically the ICD-10 Cause of Death code set. A comparative study was conducted to identify consistencies and differences between ABS ICD-10 codes and those that could be generated by utilising the full NCIS record. Discrepancies between the two sets of codes were detected in over 50% of cases, which highlighted the importance of access to complete and timely documentation in the assignment of accurate and detailed cause of death codes. PMID:18195402

  18. A comprehensive catalogue of the coding and non-coding transcripts of the human inner ear.

    PubMed

    Schrauwen, Isabelle; Hasin-Brumshtein, Yehudit; Corneveaux, Jason J; Ohmen, Jeffrey; White, Cory; Allen, April N; Lusis, Aldons J; Van Camp, Guy; Huentelman, Matthew J; Friedman, Rick A

    2016-03-01

    The mammalian inner ear consists of the cochlea and the vestibular labyrinth (utricle, saccule, and semicircular canals), which participate in both hearing and balance. Proper development and life-long function of these structures involves a highly complex coordinated system of spatial and temporal gene expression. The characterization of the inner ear transcriptome is likely important for the functional study of auditory and vestibular components, yet, primarily due to tissue unavailability, detailed expression catalogues of the human inner ear remain largely incomplete. We report here, for the first time, comprehensive transcriptome characterization of the adult human cochlea, ampulla, saccule and utricle of the vestibule obtained from patients without hearing abnormalities. Using RNA-Seq, we measured the expression of >50,000 predicted genes corresponding to approximately 200,000 transcripts, in the adult inner ear and compared it to 32 other human tissues. First, we identified genes preferentially expressed in the inner ear, and unique either to the vestibule or cochlea. Next, we examined expression levels of specific groups of potentially interesting RNAs, such as genes implicated in hearing loss, long non-coding RNAs, pseudogenes and transcripts subject to nonsense mediated decay (NMD). We uncover the spatial specificity of expression of these RNAs in the hearing/balance system, and reveal evidence of tissue specific NMD. Lastly, we investigated the non-syndromic deafness loci to which no gene has been mapped, and narrow the list of potential candidates for each locus. These data represent the first high-resolution transcriptome catalogue of the adult human inner ear. A comprehensive identification of coding and non-coding RNAs in the inner ear will enable pathways of auditory and vestibular function to be further defined in the study of hearing and balance. Expression data are freely accessible at https://www.tgen.org/home/research

  19. Estimation of ultrasonic attenuation in a bone using coded excitation.

    PubMed

    Nowicki, A; Litniewski, J; Secomski, W; Lewin, P A; Trots, I

    2003-11-01

    This paper describes a novel approach to estimating broadband ultrasound attenuation (BUA) in human bone structures in vivo using coded excitation. BUA is an accepted indicator for assessment of osteoporosis. In the tested approach a coded acoustic signal is emitted and the received echoes are compressed into brief, high amplitude pulses using matched filters and correlation receivers. In this way the acoustic peak pressure amplitude probing the tissue can be markedly decreased whereas the average transmitted intensity increases proportionally to the length of the code. This paper examines the properties of three different transmission schemes, based on the Barker code, chirp and Golay code. The system designed is capable of generating a 16-bit complementary Golay code (CGC), a linear frequency modulated (LFM) chirp and a 13-bit Barker code (BC) at 0.5 and 1 MHz center frequencies. Both in vivo data acquired from healthy heel bones and in vitro data obtained from human calcaneus were examined, and a comparison between the results using coded excitation and a two-cycle sine burst is presented. It is shown that the CGC system allows the effective range of frequencies employed in the measurement of broadband acoustic energy attenuation in the trabecular bone to be doubled in comparison to the standard 0.5 MHz pulse transmission. The algorithm used to calculate the pairs of Golay sequences of different lengths, which provide the temporal side-lobe cancellation, is also presented. Current efforts are focused on adapting the system developed for operation in pulse-echo mode; this would allow examination and diagnosis of bones with limited access such as the hip bone. PMID:14585473
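    The side-lobe cancellation that makes complementary Golay pairs attractive can be shown in a few lines (a generic sketch of the principle, not the authors' implementation; the 16-bit length matches the CGC system described above, while the use of NumPy and the function names are assumptions):

    ```python
    # Sketch: complementary Golay pair generation and side-lobe-cancelling compression.
    import numpy as np

    def golay_pair(n_doublings):
        """Recursively build a complementary Golay pair of length 2**n_doublings."""
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(n_doublings):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(4)                       # 16-bit pair
    ra = np.correlate(a, a, mode="full")       # autocorrelation of sequence A
    rb = np.correlate(b, b, mode="full")       # autocorrelation of sequence B
    # Each autocorrelation alone has range side lobes, but their sum is an ideal
    # delta: 2N at zero lag and exactly zero at every other lag.
    ideal = np.pad([2.0 * len(a)], (len(a) - 1, len(a) - 1))
    print(np.allclose(ra + rb, ideal))         # True
    ```

    Transmitting both members of the pair and summing the two matched-filter outputs therefore yields a delta-like compressed pulse of height 2N, which is what allows the transmitted peak pressure to be lowered while the average transmitted energy, and hence the SNR, is preserved.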

  20. Radiation transport phenomena and modeling - part A: Codes

    SciTech Connect

    Lorence, L.J.

    1997-06-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing.) The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.

  1. Programming a real code in a functional language (part 1)

    SciTech Connect

    Hendrickson, C.P.

    1991-09-10

    For some, functional languages hold the promise of allowing ease of programming massively parallel computers that imperative languages such as Fortran and C do not offer. At LLNL, we have initiated a project to write the physics of a major production code in Sisal, a functional language developed at LLNL in collaboration with researchers throughout the world. We are investigating the expressibility of Sisal, as well as its performance on a shared-memory multiprocessor, the Y-MP. An interesting aspect of the project is that Sisal modules can call Fortran modules, and are callable by them. This eliminates the rewriting of 80% of the production code that would not benefit from parallel execution. Preliminary results indicate that the restrictive nature of the language does not cause problems in expressing the algorithms we have chosen. Some interesting aspects of programming in a mixed functional-imperative environment have surfaced, but can be managed. 8 refs.

  2. HADES, A Code for Simulating a Variety of Radiographic Techniques

    SciTech Connect

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.

  3. A need for a code of ethics in science communication?

    NASA Astrophysics Data System (ADS)

    Benestad, R. E.

    2009-09-01

    Modern western civilization and its high standard of living are to a large extent the 'fruits' of scientific endeavor over generations. Some examples include the longer life expectancy due to progress in the medical sciences, and changes in infrastructure associated with the utilization of electromagnetism. Modern meteorology is not possible without state-of-the-art digital computers, satellites, remote sensing, and communications. Science is also of relevance for policy making, e.g. the present hot topic of climate change. Climate scientists have recently become much exposed to media focus and mass communication, a task for which many are not trained. Furthermore, science, communication, and politics have different objectives, and do not necessarily mix. Scientists have an obligation to provide unbiased information, and a code of ethics is needed to give guidance on acceptable and unacceptable conduct. Some examples of questionable conduct in Norway include using the title 'Ph.D.' to imply scientific authority when the person never obtained such an academic degree, or writing biased and one-sided articles in a Norwegian encyclopedia that do not reflect the scientific consensus. It is proposed here that a set of guidelines (for scientists and journalists) and a code of conduct could provide recommendations on how to act in the media - similar to a code of conduct with respect to carrying out research - to which everyone could agree, even when disagreeing on specific scientific questions.

  4. A systems neurophysiology approach to voluntary event coding.

    PubMed

    Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian

    2016-07-15

    Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward, with the "Theory of Event Coding" (TEC) being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine which sub-processes, dissociable by means of cognitive-neurophysiological methods, are involved in voluntary event coding. This was combined with EEG source localization. We also introduce reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that the processes involved in voluntary event coding included predominantly stimulus categorization, feature unbinding and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). On a systems-neurophysiological level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions, likely reflecting aspects of perceptual decision making, but did not modulate the other processes. The perceptual categorization stage appears central for voluntary event-file coding. PMID:27153981

  5. A novel 2D wavelength-time chaos code in optical CDMA system

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Xin, Xiangjun; Wang, Yongjun; Zhang, Lijia; Yu, Chongxiu; Meng, Nan; Wang, Houtian

    2012-11-01

    A two-dimensional wavelength-time chaos code is proposed and constructed for a synchronous optical code division multiple access system. The access performance is compared among the one-dimensional chaos code, the WDM/chaos code and the proposed code. The comparison shows that the two-dimensional wavelength-time chaos code possesses larger capacity, better spectral efficiency and a lower bit-error ratio than the WDM/chaos combination and the one-dimensional chaos code.

  6. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  7. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  8. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false What will happen if a tribe repeals its probate code? 18... CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code? If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  9. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  10. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  11. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  12. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  13. 41 CFR 102-33.375 - What is a FSCAP Criticality Code?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Criticality Code? 102-33.375 Section 102-33.375 Public Contracts and Property Management Federal Property... Flight Safety Critical Aircraft Parts (fscap) and Life-Limited Parts § 102-33.375 What is a FSCAP Criticality Code? A FSCAP Criticality Code is a code assigned by DOD to indicate the type of FSCAP: Code...

  14. A parallel TreeSPH code for galaxy formation

    NASA Astrophysics Data System (ADS)

    Lia, Cesario; Carraro, Giovanni

    2000-05-01

    We describe a new implementation of a parallel TreeSPH code with the aim of simulating galaxy formation and evolution. The code has been parallelized using shmem, a Cray proprietary library to handle communications between the 256 processors of the Silicon Graphics T3E massively parallel supercomputer hosted by the Cineca Supercomputing Center (Bologna, Italy). The code combines the smoothed particle hydrodynamics (SPH) method for solving hydrodynamical equations with the popular Barnes & Hut tree code to perform the gravity calculation with an N log N scaling, and it is based on the scalar TreeSPH code developed by Carraro et al. Parallelization is achieved by distributing particles over processors according to a workload criterion. Benchmarks of the code, in terms of load balance and scalability, are analysed and critically discussed against the adiabatic collapse of an isothermal gas sphere test using 2×10^4 particles on 8 processors. The code load balances at better than the 95 per cent level. Increasing the number of processors, the load balance slightly worsens. The deviation from perfect scalability for increasing numbers of processors is almost negligible up to 32 processors. Finally, we present a simulation of the formation of an X-ray galaxy cluster in a flat cold dark matter cosmology, using 2×10^5 particles and 32 processors, and compare our results with Evrard's P3M-SPH simulations. Additionally we have incorporated radiative cooling, star formation, feedback from SNe of types II and Ia, stellar winds and UV flux from massive stars, and an algorithm to follow the chemical enrichment of the interstellar medium. Simulations with some of these ingredients are also presented.

  15. LUDWIG: A parallel Lattice-Boltzmann code for complex fluids

    NASA Astrophysics Data System (ADS)

    Desplat, Jean-Christophe; Pagonabarraga, Ignacio; Bladon, Peter

    2001-03-01

    This paper describes Ludwig, a versatile code for the simulation of Lattice-Boltzmann (LB) models in 3D on cubic lattices. In fact, Ludwig is not a single code, but a set of codes that share certain common routines, such as I/O and communications. If Ludwig is used as intended, a variety of complex fluid models with different equilibrium free energies are simple to code, so that the user may concentrate on the physics of the problem, rather than on parallel computing issues. Thus far, Ludwig's main application has been to symmetric binary fluid mixtures. We first explain the philosophy and structure of Ludwig which is argued to be a very effective way of developing large codes for academic consortia. Next we elaborate on some parallel implementation issues such as parallel I/O, and the use of MPI to achieve full portability and good efficiency on both MPP and SMP systems. Finally, we describe how to implement generic solid boundaries, and look in detail at the particular case of a symmetric binary fluid mixture near a solid wall. We present a novel scheme for the thermodynamically consistent simulation of wetting phenomena, in the presence of static and moving solid boundaries, and check its performance.

  16. Bounds of the bit error probability of a linear cyclic code over GF(2^l) and its extended code

    NASA Technical Reports Server (NTRS)

    Cheng, Unjeng; Huth, Gaylord K.

    1988-01-01

    An upper bound on the bit-error probability (BEP) of a linear cyclic code over GF(2^l) with hard-decision (HD) maximum-likelihood (ML) decoding on memoryless symmetric channels is derived. Performance results are presented for Reed-Solomon codes on GF(32), GF(64), and GF(128). Also, a union upper bound on the BEP of a linear cyclic code with either HD or soft-decision (SD) ML decoding is developed, as well as the corresponding bounds for the extended code of a linear cyclic code. Using these bounds, which are tight at low bit error rate, the performance advantage of SD and HD ML over bounded-distance decoding is established.
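    For reference, union bounds of this type generally take the following textbook form (a generic statement, not the refined expressions derived in the report); with $\bar{w}_d$ the total information weight of the codewords of Hamming weight $d$ in an $(n,k)$ code of rate $R=k/n$:

    ```latex
    P_b \;\le\; \sum_{d=d_{\min}}^{n} \frac{\bar{w}_d}{k}\; P_2(d),
    \qquad
    P_2(d) \;=\;
    \begin{cases}
    \displaystyle\sum_{i=\lceil (d+1)/2\rceil}^{d} \binom{d}{i}\, p^{\,i} (1-p)^{\,d-i}
      & \text{HD decoding, channel error rate } p,\\[2ex]
    Q\!\left(\sqrt{2\,d\,R\,E_b/N_0}\right)
      & \text{SD decoding, BPSK on the AWGN channel,}
    \end{cases}
    ```

    where ties at even $d$ are ignored in the HD case; the bounds in the paper sharpen this generic form using the weight structure of the cyclic code and its extension.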

  17. Experimental qualification of a code for optimizing gamma irradiation facilities

    NASA Astrophysics Data System (ADS)

    Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.

    Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are frequently found difficult to meet.

  18. A new balanced modulation code for a phase-image-based holographic data storage system

    NASA Astrophysics Data System (ADS)

    John, Renu; Joseph, Joby; Singh, Kehar

    2005-08-01

    We propose a new balanced modulation code for coding data pages for phase-image-based holographic data storage systems. The new code addresses the coding subtleties associated with phase-based systems while performing a content-based search in a holographic database. The new code, which is a balanced modulation code, is a modification of the existing 8:12 modulation code, and removes the false hits that occur in phase-based content-addressable systems due to phase-pixel subtractions. We demonstrate the better performance of the new code using simulations and experiments in terms of discrimination ratio while content addressing through a holographic memory. The new code is compared with the conventional coding scheme to analyse the false hits due to subtraction of phase pixels.

  19. A colorful origin for the genetic code: information theory, statistical mechanics and the emergence of molecular codes.

    PubMed

    Tlusty, Tsvi

    2010-09-01

    The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. While the biochemical details of this code were unraveled long ago, its origin is still obscure. We review information-theoretic approaches to the problem of the code's origin and discuss the results of a recent work that treats the code in terms of an evolving, error-prone information channel. Our model - which utilizes the rate-distortion theory of noisy communication channels - suggests that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino-acids, for error-tolerance and for minimal cost of resources. The description of the code as an information channel allows us to mathematically identify the fitness of the code and locate its emergence at a second-order phase transition when the mapping of codons to amino-acids becomes nonrandom. The noise in the channel brings about an error-graph, in which edges connect codons that are likely to be confused. The emergence of the code is governed by the topology of the error-graph, which determines the lowest modes of the graph-Laplacian and is related to the map coloring problem. PMID:20558115
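    The rate-distortion picture invoked here can be summarized generically (a sketch of the standard formalism, not the specific functional analyzed in the review): the code is a conditional distribution $q(a\mid c)$ from codons to amino acids, and fitness balances the average chemical cost of misreadings against the information, and hence resource, cost of the mapping,

    ```latex
    \min_{\,q(a\mid c)} \;\Big[\; \langle d(c,a)\rangle_{q} \;+\; \lambda\, I(C;A) \;\Big],
    ```

    where $d(c,a)$ measures the cost of translating codon $c$ as amino acid $a$ and $\lambda$ weighs resources against accuracy. In annealing-type problems of this form the optimal $q$ first departs from the uniform, codon-independent solution at a critical value of the trade-off parameter, which corresponds to the second-order phase transition at which the code becomes nonrandom.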

  20. Performance analysis of a multilevel coded modulation system

    NASA Astrophysics Data System (ADS)

    Kofman, Yosef; Zehavi, Ephraim; Shamai, Shlomo

    1994-02-01

    A modified version of the multilevel coded modulation scheme of Imai & Hirakawa is presented and analyzed. In the transmitter, the outputs of the component codes are bit-interleaved prior to mapping into 8-PSK channel signals. A multistage receiver is considered, in which the output amplitudes of the Gaussian channel are soft-limited before entering the second and third stage decoders. Upper bounds and Gaussian approximations for the bit error probability of every component code, which take into account errors in previously decoded stages, are presented. Aided by a comprehensive computer simulation, it is demonstrated in a specific example that the addition of the interleaver and soft limiter in the third stage improves its performance by 1.1 dB at a bit error probability of 10(exp -5), and that the multilevel scheme improves on an Ungerboeck code with the same decoding complexity. The rate selection of the component codes is also considered and a simple selection rule, based on information-theoretic arguments, is provided.
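    A commonly cited information-theoretic rule for choosing the component-code rates in multilevel schemes of this kind is the so-called capacity rule (whether the paper's selection rule coincides with it is not stated in the abstract): level $i$ is assigned at most the conditional mutual information of that bit level given the lower levels,

    ```latex
    R_i \;\le\; I\!\left(Y;\,X_i \,\middle|\, X_1,\dots,X_{i-1}\right),
    \qquad
    \sum_{i=1}^{m} I\!\left(Y;\,X_i \,\middle|\, X_1,\dots,X_{i-1}\right) \;=\; I\!\left(Y;\,X_1,\dots,X_m\right),
    ```

    so that, by the chain rule, the sum of the level rates can approach the mutual information of the full 8-PSK signal set and multistage decoding suffers no rate loss in principle.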

  1. Organizing conceptual knowledge in humans with a gridlike code.

    PubMed

    Constantinescu, Alexandra O; O'Reilly, Jill X; Behrens, Timothy E J

    2016-06-17

    It has been hypothesized that the brain organizes concepts into a mental map, allowing conceptual relationships to be navigated in a manner similar to that of space. Grid cells use a hexagonally symmetric code to organize spatial representations and are the likely source of a precise hexagonal symmetry in the functional magnetic resonance imaging signal. Humans navigating conceptual two-dimensional knowledge showed the same hexagonal signal in a set of brain regions markedly similar to those activated during spatial navigation. This gridlike signal is consistent across sessions acquired within an hour and more than a week apart. Our findings suggest that global relational codes may be used to organize nonspatial conceptual representations and that these codes may have a hexagonal gridlike pattern when conceptual knowledge is laid out in two continuous dimensions. PMID:27313047

  2. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  3. The performance of a sequential acquisition system for PN codes

    NASA Astrophysics Data System (ADS)

    Kerr, R. W.; Arakaki, E. M.; Huang, M. Y.

    Direct sequence spread spectrum techniques are being applied in an increasing number of advanced communication systems where anti-jam (AJ), low probability of intercept (LPI), or code division multiple access (CDMA) capabilities are required. In all these systems, rapid acquisition of a long PN code is a system necessity. Generally, acquisition of long PN codes is accomplished by correlation measurements of the incoming sequence with a locally generated code sequence. However, instead of utilizing fixed integration times, a sequential acquisition technique can also be used for active correlation, which results in greatly reduced acquisition times. TRW has designed and completed a limited production of 33 spread spectrum receivers for use with the NASA Tracking Data Relay Satellite System (TDRSS). The receivers provide multiple access and ranging capability while simultaneously decreasing the transmitted power flux density to meet CCIR restrictions. This paper presents the analysis, hardware description, and performance of the sequential code acquisition system implemented on these receivers. A unique noise calibration process, which holds the key to successful operation of these receivers, is described in detail.
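
    A minimal sketch of PN-code acquisition by correlation is given below; it uses a generic length-127 m-sequence and an exhaustive sliding correlation rather than the sequential (variable dwell) detector of the TRW receivers, so the LFSR taps, noise level and threshold-free peak search are all illustrative assumptions:

```python
import numpy as np

def m_sequence(degree=7, taps=(7, 6), seed=1):
    """Generate one period of a +/-1 m-sequence from a simple Fibonacci LFSR."""
    state = [(seed >> i) & 1 for i in range(degree)]
    out = []
    for _ in range(2 ** degree - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return 1 - 2 * np.array(out)            # map {0,1} -> {+1,-1}

rng = np.random.default_rng(1)
code = m_sequence()                          # local replica, length 127
true_delay = 42
rx = np.roll(code, true_delay) + 0.7 * rng.standard_normal(code.size)

# Active (sliding) correlation: correlate the received chips against every
# cyclic shift of the local code and declare acquisition at the peak.
corr = np.array([np.dot(rx, np.roll(code, d)) for d in range(code.size)])
print("estimated code phase:", int(np.argmax(corr)), "true:", true_delay)
```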

  4. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  5. European coding system for tissues and cells: a challenge unmet?

    PubMed

    Reynolds, Melvin; Warwick, Ruth M; Poniatowski, Stefan; Trias, Esteve

    2010-11-01

    The Comité Européen de Normalisation (European Committee for Standardization, CEN) Workshop on Coding of Information and Traceability of Human Tissues and Cells was established by the Expert Working Group of the Directorate General for Health and Consumer Affairs of the European Commission (DG SANCO) to identify requirements concerning the coding of information and the traceability of human tissues and cells, and propose guidelines and recommendations to permit the implementation of the European Coding system required by the European Tissues and Cells Directive 2004/23/EC (ED). The Workshop included over 70 voluntary participants from tissue, blood and eye banks, national ministries for healthcare, transplant organisations, universities and coding organisations; mainly from Europe with a small number of representatives from professionals in Canada, Australia, USA and Japan. The Workshop commenced in April 2007 and held its final meeting in February 2008. The draft Workshop Agreement went through a public comment phase from 15 December 2007 until 15 January 2008 and the endorsement period ran from 9 April 2008 until 2 May 2008. The endorsed CEN Workshop Agreement (CWA) set out the issues regarding a common coding system, qualitatively assessed what the industry felt was required of a coding system, reviewed coding systems that were put forward as potential European coding systems and established a basic specification for a proposed European coding system for human tissues and cells, based on ISBT 128, and which is compatible with existing systems of donation identification, traceability and nomenclatures, indicating how implementation of that system could be approached. The CWA, and the associated Workshop proposals with recommendations, were finally submitted to the European Commission and to the Committee of Member States that assists its management process under article 29 of the Directive 2004/23/EC on May 25 2008. In 2009 the European Commission initiated an

  6. Overview of WARP, a particle code for Heavy Ion Fusion

    SciTech Connect

    Friedman, A.; Grote, D.P.; Callahan, D.A.; Langdon, A.B.; Haber, I.

    1993-02-22

    The beams in a Heavy Ion beam driven inertial Fusion (HIF) accelerator must be focused onto small spots at the fusion target, and so preservation of beam quality is crucial. The nonlinear self-fields of these space-charge-dominated beams can lead to emittance growth; thus a self-consistent field description is necessary. We have developed a multi-dimensional discrete-particle simulation code, WARP, and are using it to study the behavior of HIF beams. The code's 3d package combines features of an accelerator code and a particle-in-cell plasma simulation, and can efficiently track beams through many lattice elements and around bends. We have used the code to understand the physics of aggressive drift-compression in the MBE-4 experiment at Lawrence Berkeley Laboratory (LBL). We have applied it to LBL's planned ILSE experiments, to various "recirculator" configurations, and to the study of equilibria and equilibration processes. Applications of the 3d package to ESQ injectors, and of the r, z package to longitudinal stability in driver beams, are discussed in related papers.

  7. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case the interaction between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  8. Parallel Processing of a Groundwater Contaminant Code

    SciTech Connect

    Arnett, Ronald Chester; Greenwade, Lance Eric

    2000-05-01

    The U. S. Department of Energy’s Idaho National Engineering and Environmental Laboratory (INEEL) is conducting a field test of experimental enhanced bioremediation of trichloroethylene (TCE) contaminated groundwater. TCE is a chlorinated organic substance that was used as a solvent in the early years of the INEEL and disposed in some cases to the aquifer. There is an effort underway to enhance the natural bioremediation of TCE by adding a non-toxic substance that serves as a feed material for the bacteria that can biologically degrade the TCE.

  9. Synergy from Silence in a Combinatorial Neural Code

    PubMed Central

    Schneidman, Elad; Puchalla, Jason L.; Segev, Ronen; Harris, Robert A.; Bialek, William; Berry, Michael J.

    2011-01-01

    The manner in which groups of neurons represent events in the external world is a central question in neuroscience. Estimation of the information encoded by small groups of neurons has shown that in many neural systems, cells carry mildly redundant information. These measures average over all the activity patterns of a neural population. Here, we analyze the population code of the salamander and guinea pig retinas by quantifying the information conveyed by specific multi-cell activity patterns. Synchronous spikes, even though they are relatively rare and highly informative, convey less information than the sum of either spike alone, making them redundant coding symbols. Instead, patterns of spiking in one cell and silence in others, which are relatively common and often overlooked as special coding symbols, were found to be mostly synergistic. Our results reflect that the mild average redundancy between ganglion cells that was previously reported is actually the result of redundant and synergistic multi-cell patterns, whose contributions partially cancel each other when taking the average over all patterns. We further show that similar coding properties emerge in a generic model of neural responses, suggesting that this form of combinatorial coding, in which specific compound patterns carry synergistic or redundant information, may exist in other neural circuits. PMID:22049416
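
    A coarser but related calculation can be sketched as follows: compare the mutual information carried by the joint response of two cells with the sum of their single-cell informations (a positive difference indicates synergy, a negative one redundancy). The toy stimulus and responses below are assumptions for demonstration; the paper itself quantifies information pattern by pattern rather than averaging over a pair:

```python
import numpy as np
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits from a list of (x, y) samples (plug-in estimator)."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * np.log2(c * n / (px[x] * py[y]))
    return mi

rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 20000)             # binary stimulus
# Toy responses: each cell is a noisy copy of the stimulus; 0 = silence, 1 = spike.
r1 = (stim ^ (rng.random(stim.size) < 0.15)).astype(int)
r2 = (stim ^ (rng.random(stim.size) < 0.15)).astype(int)

i1 = mutual_information(list(zip(r1, stim)))
i2 = mutual_information(list(zip(r2, stim)))
i_joint = mutual_information(list(zip(zip(r1, r2), stim)))

# synergy > 0: the pair carries more than the sum of its parts;
# synergy < 0: the cells are redundant (the expected outcome for this toy pair).
print("synergy =", i_joint - (i1 + i2))
```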

  10. A Coach's Code of Conduct. Position Statement

    ERIC Educational Resources Information Center

    Lyman, Linda; Ewing, Marty; Martino, Nan

    2009-01-01

    Coaches exert a profound impact on our youths; therefore, society sets high expectations for them. As such, whether coaches are compensated or work solely as volunteers, they are responsible for executing coaching as a professional. If we are to continue to enhance the cultural perceptions of coaching, we must strive to develop and master the…

  11. A neural coding scheme reproducing foraging trajectories

    PubMed Central

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series. PMID:26648311

  12. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series.

  13. A neural coding scheme reproducing foraging trajectories.

    PubMed

    Gutiérrez, Esther D; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series. PMID:26648311

  14. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in the literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and a mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. This work was supported by the John A. Hauck Foundation and Xavier University.

  15. Incorporation of Condensation Heat Transfer in a Flow Network Code

    NASA Technical Reports Server (NTRS)

    Anthony, Miranda; Majumdar, Alok; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    In this paper we have investigated the condensation of water vapor in a short tube. A numerical model of condensation heat transfer was incorporated in a flow network code. The flow network code that we have used in this paper is the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a finite volume based flow network code. Four different condensation models are presented in the paper. Soliman's correlation has been found to be the most stable at low flow rates, which are of particular interest in this application. Another highlight of this investigation is conjugate, or coupled, heat transfer between solid and fluid. This work was done in support of NASA's International Space Station program.

  16. Colour coding scrubs as a means of improving perioperative communication.

    PubMed

    Litak, Dominika

    2011-05-01

    Effective communication within the operating department is essential for achieving patient safety. A large part of perioperative communication is non-verbal. One type of non-verbal communication is 'object communication', the most common form of which is clothing. The colour coding of clothing such as scrubs has the potential to optimise perioperative communication with patients and between staff. A colour contains a coded message, and is a visual cue for immediate identification of personnel. This is of key importance in the perioperative environment. The idea of colour coded scrubs in the perioperative setting has not been much explored to date and, given its potential contribution towards improvement of patient outcomes, deserves consideration. PMID:21834289

  17. Turbulence requirements of a commercial CFD code

    NASA Technical Reports Server (NTRS)

    Vandoormaal, J. P.; Mueller, C. M.; Raw, M. J.

    1995-01-01

    This viewgraph presentation gives a profile of Advanced Scientific Computing (ASC) Ltd., applications, clients and clients' needs, ASC's directions, and how the Center for Modeling of Turbulence and Transition (CMOTT) can help.

  18. SCAMPI: A code package for cross-section processing

    SciTech Connect

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  19. Image Coding By Vector Quantization In A Transformed Domain

    NASA Astrophysics Data System (ADS)

    Labit, C.; Marescq, J. P.

    1986-05-01

    Using vector quantization in a transformed domain, TV images are coded. The method exploits the spatial redundancies of small 4x4 blocks of pixels: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, the high-energy-carrying coefficients are retained and, using vector quantization, a codebook is built for the remaining AC part of the transformed blocks. The codewords are referenced by an index. Each block is then coded by specifying its DC coefficient and associated index.
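
    A minimal sketch of this pipeline (blockwise DCT, separation of the DC coefficient, and vector quantization of the AC part) is given below; the random test image, the 32-entry codebook built with k-means and the omission of the visual classification stage are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
image = rng.random((64, 64))                 # stand-in for a TV image

# 1) 4x4 block DCT
blocks = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 4, 4)
coeffs = np.array([dctn(b, norm="ortho") for b in blocks])

# 2) separate the DC coefficient, vector-quantize the 15 AC coefficients
dc = coeffs[:, 0, 0].copy()
ac = coeffs.reshape(-1, 16)[:, 1:]           # 15-dimensional AC vectors
codebook, labels = kmeans2(ac, 32, minit="++")

# 3) each block is coded by its DC coefficient plus a 5-bit codebook index
rec = np.zeros_like(coeffs)
rec.reshape(-1, 16)[:, 1:] = codebook[labels]
rec[:, 0, 0] = dc
decoded = np.array([idctn(c, norm="ortho") for c in rec])
print("mean squared error:", np.mean((decoded - blocks) ** 2))
```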

  20. NASTRAN as a resource in code development

    NASA Technical Reports Server (NTRS)

    Stanton, E. L.; Crain, L. M.; Neu, T. F.

    1975-01-01

    A case history is presented in which the NASTRAN system provided both guidelines and working software for use in the development of a discrete element program, PATCHES-111. To avoid duplication and to take advantage of the wide spread user familiarity with NASTRAN, the PATCHES-111 system uses NASTRAN bulk data syntax, NASTRAN matrix utilities, and the NASTRAN linkage editor. Problems in developing the program are discussed along with details on the architecture of the PATCHES-111 parametric cubic modeling system. The system includes model construction procedures, checkpoint/restart strategies, and other features.

  1. Requirements for a multifunctional code architecture

    SciTech Connect

    Tiihonen, O.; Juslin, K.

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of the traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time the features of everyday office software tend to set standards for the way the input data and calculational results are managed.

  2. DART: A simulation code for charged particle beams: Revision 1

    SciTech Connect

    White, R.C.; Barr, W.L.; Moir, R.W.

    1989-07-31

    This paper presents a recently modified version of the 2-D code, DART, which can simulate the behavior of a beam of charged particles whose trajectories are determined by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation includes space charge, secondary electrons, and the ionization of neutral gas. A beam can contain up to nine superimposed beamlets of different energy and species. The calculation of energy conversion efficiency and the method of specifying the electrode geometry are described. Basic procedures for using the code are given, and sample input and output files are shown. 7 refs., 18 figs.
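
    As a generic illustration of the kind of trajectory integration such a code performs (this is a textbook Boris pusher in uniform static fields, not DART's field solver or space-charge model), consider:

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt, steps):
    """Advance a charged particle with the standard Boris rotation scheme."""
    traj = [x.copy()]
    for _ in range(steps):
        # half electric kick
        v_minus = v + 0.5 * q_over_m * E * dt
        # magnetic rotation
        t = 0.5 * q_over_m * B * dt
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        # second half electric kick, then drift
        v = v_plus + 0.5 * q_over_m * E * dt
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj)

# Example: E x B drift of a proton-like particle in crossed fields.
E = np.array([1.0e3, 0.0, 0.0])      # V/m
B = np.array([0.0, 0.0, 1.0e-2])     # T
orbit = boris_push(np.zeros(3), np.array([0.0, 1.0e4, 0.0]),
                   E, B, q_over_m=9.58e7, dt=1.0e-9, steps=5000)
print("final position:", orbit[-1])
```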

  3. Parallelization of a three-dimensional compressible transition code

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Bokhari, Shahid H.

    1990-01-01

    The compressible, three-dimensional, time-dependent Navier-Stokes equations are solved on a 20 processor Flex/32 computer. The code is a parallel implementation of an existing code operational on the Cray-2 at NASA Ames, which performs direct simulations of the initial stages of the transition process of wall-bounded flow at supersonic Mach numbers. Spectral collocation in all three spatial directions (Fourier along the plate and Chebyshev normal to it) ensures high accuracy of the flow variables. By hiding most of the parallelism in low-level routines, the casual user is shielded from most of the nonstandard coding constructs. Speedups of 13 out of a maximum of 16 are achieved on the largest computational grids.

  4. Error correction coding for a meteor burst channel

    NASA Astrophysics Data System (ADS)

    Miller, Scott L.; Milstein, Laurence B.

    1990-09-01

    The time-varying-SNR model for the meteor burst (MB) channel is reviewed. Bounds on the capacity of the channel are derived for both a constant SNR model and a time-varying SNR model. These bounds show that there is a significant throughput improvement to be gained by using forward error correction. Two methods are given for determining the performance of an MB system when packets of information are encoded with an (n,k) linear block code. Numerical results are generated using high-rate BCH codes, and it is found that about 25 percent improvement over uncoded systems can be obtained by choosing the code rate properly. In addition, some suggestions for techniques that provide further improvement are given.
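
    The rate-selection tradeoff can be illustrated with a simple throughput calculation for (n,k) t-error-correcting block codes on a memoryless binary symmetric channel; the fixed bit error rate below is an assumption standing in for the paper's time-varying-SNR burst model:

```python
from math import comb

def packet_success_prob(n, t, p):
    """P(decoding success) for a code correcting up to t bit errors in n bits
    on a memoryless binary symmetric channel with crossover probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

def normalized_throughput(n, k, t, p):
    """Information bits delivered per transmitted bit."""
    return (k / n) * packet_success_prob(n, t, p)

p = 0.01                       # assumed channel bit error rate during a burst
# High-rate BCH codes of length 127 (n, k, t): values from standard tables.
for n, k, t in [(127, 120, 1), (127, 113, 2), (127, 106, 3), (127, 99, 4)]:
    print(f"BCH({n},{k}) t={t}: throughput {normalized_throughput(n, k, t, p):.3f}")
print(f"uncoded length-127 packet: throughput {normalized_throughput(127, 127, 0, p):.3f}")
```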

  5. Effects of bar coding on a pharmacy stock replenishment system.

    PubMed

    Chester, M I; Zilz, D A

    1989-07-01

    A bar-code stock ordering system installed in the ambulatory-care pharmacy and sterile products area of a hospital pharmacy was compared with a manual paper system to quantify overall time demands and determine the error rate associated with each system. The bar-code system was implemented in the ambulatory-care pharmacy in November 1987 and in the sterile products area in January 1988. It consists of a Trakker 9440 transaction manager with a digital scanner; labels are printed with a dot matrix printer. Electronic scanning of bar-code labels and entry of the amount required using the key-pad on the transaction manager replaced use of a preprinted form for ordering items. With the bar-code system, ordering information is transferred electronically via cable to the pharmacy inventory computer; with the manual system, this information was input by a stockroom technician. To compare the systems, the work of technicians in the ambulatory-care pharmacy and sterile products area was evaluated before and after implementation of the bar-code system. The time requirements for information gathering and data transfer were recorded by direct observation; the prevalence of errors under each system was determined by comparing unprocessed ordering information with the corresponding computer-generated "pick lists" (itemized lists including the amount of each product ordered). Time consumed in extra trips to the stockroom to replace out-of-stock items was self-reported. Significantly less time was required to order stock and transfer data to the pharmacy inventory computer with the bar-code system than with the manual system.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2757044

  6. A Method for Automated Program Code Testing

    ERIC Educational Resources Information Center

    Drasutis, Sigitas; Motekaityte, Vida; Noreika, Algirdas

    2010-01-01

    The Internet has recently encouraged the society to convert almost all its needs to electronic resources such as e-libraries, e-cultures, e-entertainment as well as e-learning, which has become a radical idea to increase the effectiveness of learning services in most schools, colleges and universities. E-learning can not be completely featured and…

  7. A versatile integrated block codes encoder-decoder

    NASA Astrophysics Data System (ADS)

    Laurent, P. A.

    1989-12-01

    A new Very Large Scale Integrated (VLSI) circuit is presented that performs encoding and decoding of almost all Reed-Solomon and BCH codes (including generalized BCH) using symbol sizes from 1 to 8 bits. It is fully programmable by many standard microprocessors, which treat it like any other common co-processor. Its architecture allows a high bit rate and great flexibility. The interfacing protocol is optimized to minimize timing constraints (mail boxes) and to limit programming effort: no advanced knowledge of codes is required to use the circuit.

  8. A Hydrochemical Hybrid Code for Astrophysical Problems. I. Code Verification and Benchmarks for a Photon-dominated Region (PDR)

    NASA Astrophysics Data System (ADS)

    Motoyama, Kazutaka; Morata, Oscar; Shang, Hsien; Krasnopolsky, Ruben; Hasegawa, Tatsuhiko

    2015-07-01

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.
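
    A one-dimensional toy version of this operator splitting (explicit upwind advection of a passively carried species, followed by an implicit backward-Euler chemistry update) can be sketched as follows; the grid, rates and boundary treatment are illustrative assumptions, not the KM2 algorithms:

```python
import numpy as np

nx, dx, dt = 200, 1.0, 0.4
u = 1.0                                     # uniform advection velocity
k_photo = 0.05                              # toy photodissociation rate
n_species = np.exp(-0.5 * ((np.arange(nx) - 50) / 10.0) ** 2)  # initial abundance

for step in range(200):
    # -- hydrodynamics module: advect the species as a passive scalar (upwind) --
    flux = u * n_species
    n_species[1:] -= dt / dx * (flux[1:] - flux[:-1])
    n_species[0] = n_species[1]             # crude boundary fill

    # -- chemistry module: implicit (backward Euler) update of dn/dt = -k n --
    # solving n_new = n_old - dt * k * n_new  =>  n_new = n_old / (1 + k dt)
    n_species /= (1.0 + k_photo * dt)

print("total remaining abundance:", n_species.sum() * dx)
```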

  9. [Space coding: a Nobel prize diary].

    PubMed

    Rondi-Reig, Laure

    2015-02-01

    The Nobel Prize in Physiology or Medicine for 2014 has been awarded to three neuroscientists: John O'Keefe, May-Britt Moser and Edvard Moser for "their discoveries of cells that constitute a positioning system in the brain". This rewards innovative ideas which led to the development of intracerebral recording techniques in freely moving animals, thus providing links between behavior and physiology. This prize highlights how neural activity sustains our ability to localize ourselves and move around in the environment. This research provides key insights on how the brain drives behavior. PMID:25744268

  10. A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.

    2000-10-01

    Numerical simulation of laser driven Inertial Confinement Fusion (ICF) related experiments require the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism of these systems. Electron heat flow is known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamic code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.

  11. A Spectral Verification of the HELIOS-2 Lattice Physics Code

    SciTech Connect

    D. S. Crawford; B. D. Ganapol; D. W. Nigg

    2012-11-01

    Core modeling of the Advanced Test Reactor (ATR) at INL is currently undergoing a significant update through the Core Modeling Update Project. The intent of the project is to bring ATR core modeling in line with today’s standard of computational efficiency and verification and validation practices. The HELIOS-2 lattice physics code is the lead code of several reactor physics codes to be dedicated to modernize ATR core analysis. This presentation is concerned with an independent verification of the HELIOS-2 spectral representation, including the slowing down and thermalization algorithm and its data dependency. Here, we will describe and demonstrate a recently developed simple cross section generation algorithm based entirely on analytical multigroup parameters for both the slowing down and thermal spectrum. The new capability features fine group detail to assess the flux and multiplication factor dependencies on cross section data sets using the fundamental infinite medium as an example.

  12. Towards a 3D Space Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion, limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full 3D space shielding code is presented.

  13. DART: a simulation code for charged particle beams

    SciTech Connect

    White, R.C.; Barr, W.L.; Moir, R.W.

    1988-05-16

    This paper presents a recently modified version of the 2-D DART code designed to simulate the behavior of a beam of charged particles whose paths are affected by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation technique includes space charge, secondary electron effects, and neutral gas ionization. Calculations of electrode placement and energy conversion efficiency are described. Basic operation procedures are given including sample input files and output. 7 refs., 18 figs.

  14. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  15. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  16. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a

  17. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in the order of importance and also quantifies the relative importance among the sensitive variables.
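
    One common way to perform such a post-Monte-Carlo sensitivity ranking (shown here with standardized regression coefficients on synthetic samples; SATOOL's actual measures are not specified in this record) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the samples produced by a prior Monte Carlo uncertainty study.
n = 5000
x = rng.normal(size=(n, 3))                                         # three inputs
y = 5.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=n)   # input 3 is inert

# Standardized regression coefficients: regress the standardized output on the
# standardized inputs; the squared SRCs approximate each input's share of the
# output variance for near-linear models.
xs = (x - x.mean(axis=0)) / x.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)

ranking = np.argsort(-np.abs(src))
for rank, j in enumerate(ranking, start=1):
    print(f"rank {rank}: input {j + 1}, SRC^2 = {src[j] ** 2:.3f}")
```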

  18. ELEFANT: a user-friendly multipurpose geodynamics code

    NASA Astrophysics Data System (ADS)

    Thieulot, C.

    2014-07-01

    A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique and Lagrangian markers are used to track materials in the simulation domain which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics presented as case study illustrates the potential of the code to model large scale high resolution thermo-mechanically coupled free surface flows.

  19. A test of metabolically efficient coding in the retina.

    PubMed

    Balasubramanian, Vijay; Berry, Michael J

    2002-11-01

    We tested the hypothesis that aspects of the neural code of retinal ganglion cells are optimized to transmit visual information at minimal metabolic cost. Under a broad ensemble of light patterns, ganglion cell spike trains consisted of sparse, precise bursts of spikes. These bursts were viewed as independent neural symbols. The noise in each burst was measured via repeated presentation of the visual stimulus, and the energy cost was estimated from the total charge flow during ganglion cell spiking. Given these costs and noise, the theory of efficient codes predicts an optimal distribution of symbol usage. Symbols that are either noisy or costly occur less frequently in this optimal code. We found good qualitative and quantitative agreement with the measured distribution of burst sizes for ganglion cells in the tiger salamander retina. PMID:12463343

  20. Validation of a comprehensive space radiation transport code.

    PubMed

    Shinn, J L; Cucinotta, F A; Simonsen, L C; Wilson, J W; Badavi, F F; Badhwar, G D; Miller, J; Zeitlin, C; Heilbronn, L; Tripathi, R K; Clowdsley, M S; Heinbockel, J H; Xapsos, M A

    1998-12-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, are included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in design of the SAGE-III instrument resulting in material changes to control injurious neutron production, in the study of the Space Shuttle single event upsets, and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation. PMID:11542474

  1. A Radiation Shielding Code for Spacecraft and Its Validation

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.

    2000-01-01

    The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.

  2. VIPRE-01: A thermal-hydraulic code for reactor cores:

    SciTech Connect

    Stewart, C.W.; Cuta, J.M.

    1988-03-01

    VIPRE (Versatile Internals and Component Program for Reactors; EPRI) has been developed for nuclear power utility thermal-hydraulic analysis applications. It is designed to help evaluate nuclear reactor core safety limits including minimum departure from nucleate boiling ratio (MDNBR), critical power ratio (CPR), fuel and clad temperatures, and coolant state in normal operation and assumed accident conditions. This volume discusses general and specific considerations in using VIPRE as a thermal-hydraulic analysis tool. Volume 1: Mathematical Modeling, explains the major thermal-hydraulic models and supporting mathematical correlations in detail. Volume 2: User's Manual, describes the input requirements of the codes in the VIPRE code package. Volume 3: Programmer's Manual, explains the code structure and computer interface. Experience in running VIPRE is documented in Volume 4: Applications. 25 refs., 31 figs., 7 tabs.

  3. Coding the Composing Process: A Guide for Teachers and Researchers.

    ERIC Educational Resources Information Center

    Perl, Sondra

    Designed for teachers and researchers interested in the study of the composing process, this guide introduces a method of analysis that can be applied to data from a range of different cases. Specifically, the guide offers a simple, direct coding scheme for describing the movements occurring during composing that involves four procedures: teaching…

  4. Wolof Syllable Structure: Evidence from a Secret Code.

    ERIC Educational Resources Information Center

    Ka, Omar

    A structural analysis provides new evidence concerning the internal structure of the syllable in Wolof, a West African language, through examination of the secret code called Kall, spoken mainly in Senegal's Ceneba area. It is proposed that Kall is better described as involving primarily a reduplication of the prosodic word. The first section…

  5. A Learning Environment for English Vocabulary Using Quick Response Codes

    ERIC Educational Resources Information Center

    Arikan, Yuksel Deniz; Ozen, Sevil Orhan

    2015-01-01

    This study focuses on the process of developing a learning environment that uses tablets and Quick Response (QR) codes to enhance participants' English language vocabulary knowledge. The author employed the concurrent triangulation strategy, a mixed research design. The study was conducted at a private school in Izmir, Turkey during the 2012-2013…

  6. Librarianship Needs a New Code of Professional Ethics.

    ERIC Educational Resources Information Center

    Finks, Lee W.

    1991-01-01

    Discussion of professional ethics focuses on a study by Johan Bekker that recommends a new code of ethics for librarians. Topics discussed include confidential information, extraoccupational activities, continuing education, research, occupational development, membership in occupational associations, and peer group role. A sidebar presents…

  7. RADTRAN 5: A computer code for transportation risk analysis

    SciTech Connect

    Neuhauser, K. S.; Kanipe, F. L.

    1991-01-01

    RADTRAN 5 is a computer code developed at Sandia National Laboratories (SNL) in Albuquerque, NM, to estimate radiological and nonradiological risks of radioactive materials transportation. RADTRAN 5 is written in ANSI Standard FORTRAN 77 and contains significant advances in the methodology for route-specific analysis first developed by SNL for RADTRAN 4 (Neuhauser and Kanipe, 1992). Like the previous RADTRAN codes, RADTRAN 5 contains two major modules for incident-free and accident risk analysis, respectively. All commercially important transportation modes may be analyzed with RADTRAN 5: highway by combination truck; highway by light-duty vehicle; rail; barge; ocean-going ship; cargo air; and passenger air.

  8. TAS: A Transonic Aircraft/Store flow field prediction code

    NASA Technical Reports Server (NTRS)

    Thompson, D. S.

    1983-01-01

    A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.

  9. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
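
    A minimal sketch of the subband-coding idea (a one-level Haar split followed by uniform quantization; the decorrelation filters, block adaptivity and the arithmetic-coding stage of the algorithm are omitted, and the synthetic seismogram is an assumption) is:

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar subband split into lowpass and highpass halves."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def haar_synthesis(lo, hi):
    x = np.empty(2 * lo.size)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

def quantize(band, step):
    """Uniform quantization: this is where controlled distortion enters."""
    return np.round(band / step) * step

rng = np.random.default_rng(0)
t = np.arange(2048)
seismogram = np.exp(-((t - 800) / 150.0) ** 2) * np.sin(0.2 * t) \
             + 0.02 * rng.standard_normal(t.size)

lo, hi = haar_analysis(seismogram)
# Coarser steps on the low-energy highpass band buy compression for little error;
# an adaptive arithmetic coder would then losslessly encode the quantized indices.
rec = haar_synthesis(quantize(lo, 0.01), quantize(hi, 0.05))
print("reconstruction MSE:", np.mean((rec - seismogram) ** 2))
```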

  10. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  11. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  12. Designing a Field Code: Environmental Values in Primary School.

    ERIC Educational Resources Information Center

    Aleixandre, Maria Pilar Jimenez; Rodriguez, Ramon Lopez

    2001-01-01

    Analyzes classroom transcriptions from the 4th grade on the elaboration by pupils of a behavior code for a field trip. Characterizes good teaching practice in environmental education. Results show how pupils are able to create and justify their own rules, how they cope with conflicts, and which environmental values are sustained by them. (SAH)

  13. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    ERIC Educational Resources Information Center

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  14. NERO- a post-maximum supernova radiation transport code

    NASA Astrophysics Data System (ADS)

    Maurer, I.; Jerkstrand, A.; Mazzali, P. A.; Taubenberger, S.; Hachinger, S.; Kromer, M.; Sim, S.; Hillebrandt, W.

    2011-12-01

    The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a non-local thermodynamic equilibrium (NLTE) treatment of radiation transport has rarely been studied. In this paper, we present a new SN radiation transport code, NERO, which can look at those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and the intermediate nebular phase. As a test, we compare NERO to the radiation transport code of Jerkstrand, Fransson & Kozma and to the nebular code of Mazzali et al. All three codes have been developed independently and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' Type Ia supernova (SN Ia) 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post-maximum epochs.

  15. Dependent video coding using a tree representation of pixel dependencies

    NASA Astrophysics Data System (ADS)

    Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano

    2011-09-01

    Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, thus with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large descendance of pixels which are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented into a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due to both the presence of complex encoder features not captured by the model, and to the complexity of the source to be encoded.
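
    A toy version of the dependency-tree bookkeeping can be sketched as follows; the parent map, the linear form of the discount and the constants are purely illustrative assumptions, not the cost functions devised in the paper:

```python
from collections import defaultdict

# parent[p] is the reference-frame pixel that motion compensation copies from;
# roots (intra-coded pixels) have parent -1.  Values below are purely illustrative.
parent = {0: -1, 1: 0, 2: 0, 3: 1, 4: 1, 5: -1, 6: 5}

children = defaultdict(list)
for pix, par in parent.items():
    if par >= 0:
        children[par].append(pix)

def descendants(pix):
    """Size of the subtree rooted at pix (pixels whose prediction chains reach it)."""
    return sum(1 + descendants(c) for c in children[pix])

# Toy encoder decision: the Lagrangian cost D + lambda*R gets a discount that grows
# with the number of descendants, so heavily referenced pixels are coded at higher
# quality.  The linear discount is an assumed placeholder, not the paper's form.
lam, gamma = 1.0, 0.2
def discounted_cost(distortion, rate, pix):
    return distortion + lam * rate - gamma * descendants(pix)

for pix in sorted(parent):
    print(pix, "descendants:", descendants(pix),
          "cost:", discounted_cost(1.0, 2.0, pix))
```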

  16. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

    A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines. We achieve an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.
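
    A tracer-particle recorder of the kind mentioned above can be sketched in a few lines, assuming a 2-D Eulerian grid whose fields (rho, temp, vx, vy) are available after every hydro step; the helper names and the bilinear interpolation are illustrative stand-ins, not the scheme used in the code.

        import numpy as np

        def bilinear(field, x, y, dx):
            """Bilinear interpolation of a cell-centred field at position (x, y);
            assumes the particle stays strictly inside the grid."""
            i, j = int(x // dx), int(y // dx)
            fx, fy = x / dx - i, y / dx - j
            return ((1 - fx) * (1 - fy) * field[i, j] + fx * (1 - fy) * field[i + 1, j]
                    + (1 - fx) * fy * field[i, j + 1] + fx * fy * field[i + 1, j + 1])

        def advance_tracers(tracers, history, grid, dt, dx):
            """Move each tracer with the local fluid velocity and log its thermodynamic
            state for later post-processing (e.g. nucleosynthesis with a nuclear network)."""
            for p in tracers:
                p["x"] += bilinear(grid["vx"], p["x"], p["y"], dx) * dt
                p["y"] += bilinear(grid["vy"], p["x"], p["y"], dx) * dt
                history[p["id"]].append((bilinear(grid["rho"], p["x"], p["y"], dx),
                                         bilinear(grid["temp"], p["x"], p["y"], dx)))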

  17. Beyond a code of ethics: phenomenological ethics for everyday practice.

    PubMed

    Greenfield, Bruce; Jensen, Gail M

    2010-06-01

    Physical therapy, like all health-care professions, governs itself through a code of ethics that defines its obligations of professional behaviour. The code of ethics provides professions with a consistent and common moral language and principled guidelines for ethical actions. Yet, as argued in this paper, professional codes of ethics have limits when applied to ethical decision-making in the presence of ethical dilemmas. Part of the limitation of codes of ethics is that there is no particular hierarchy of principles that governs in all situations. Instead, the exigencies of clinical practice, the particularities of individual patients' illness experiences and the transformative nature of chronic illnesses and disabilities often obscure the ethical concerns and issues embedded in concrete situations. Consistent with models of expert practice, and with contemporary models of patient-centred care, we advocate and describe in this paper a type of interpretative and narrative approach to moral practice and ethical decision-making based on phenomenology. The tools of phenomenology that are well defined in research are applied and examined in a case that illustrates their use in uncovering the values and ethical concerns of a patient. Based on a phenomenological deconstruction of this case, we illustrate how such approaches to ethical understanding can assist clinicians and educators in applying principles within the context and needs of each patient. PMID:20564757

  18. Bio—Cryptography: A Possible Coding Role for RNA Redundancy

    NASA Astrophysics Data System (ADS)

    Regoli, M.

    2009-03-01

    The RNA-Crypto System (RCS for short) is a symmetric-key algorithm for enciphering data. The idea for this new algorithm comes from the observation of nature, in particular of RNA behaviour and some of its properties. RNA sequences have some sections called introns. Introns, derived from the term "intragenic regions," are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in the pre-mRNA are not yet clear and are the subject of intensive research by biologists; in our case, we use the presence of introns in the RNA-Crypto System output as a way to add chaotic non-coding information and to obscure access to the secret key used to encipher the messages. In the RNA-Crypto System algorithm the introns are sections of the ciphered message carrying non-coding information, just as in the precursor mRNA.
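
    As a toy illustration of the intron idea (and nothing more: this is not the RCS algorithm), the fragment below pads a ciphertext with key-derived "introns" - random non-coding bytes inserted at positions only the key holder can reconstruct - and splices them out again on the receiving side. All names and parameters are hypothetical.

        import hmac, hashlib, os

        def intron_positions(key, n_exons, n_introns):
            """Derive deterministic, key-dependent insertion points shared by both sides."""
            total = n_exons + n_introns
            positions, counter = set(), 0
            while len(positions) < n_introns:
                block = hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
                for i in range(0, len(block) - 1, 2):
                    positions.add(int.from_bytes(block[i:i + 2], "big") % total)
                    if len(positions) == n_introns:
                        break
                counter += 1
            return positions

        def splice_in(ciphertext, key, n_introns=16):
            """Insert random filler bytes ("introns") at key-derived positions."""
            pos = intron_positions(key, len(ciphertext), n_introns)
            it, out = iter(ciphertext), bytearray()
            for i in range(len(ciphertext) + n_introns):
                out.append(os.urandom(1)[0] if i in pos else next(it))
            return bytes(out)

        def splice_out(padded, key, n_introns=16):
            """Remove the introns, recovering the original ciphertext."""
            pos = intron_positions(key, len(padded) - n_introns, n_introns)
            return bytes(b for i, b in enumerate(padded) if i not in pos)

    For example, splice_out(splice_in(ct, b"secret-key"), b"secret-key") == ct for any ciphertext ct, while an observer without the key has no direct way of telling which bytes are filler.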

  19. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
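
    The parallel pattern described above - independent particle histories per process, per-rank random number streams, and a final reduction of the tallies - can be sketched with mpi4py as follows. The toy random-walk kernel is a placeholder for the MC4 physics, and the simple seeding scheme is an illustrative assumption rather than the SPRNG/DCMT approach used in the paper.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        rng = np.random.default_rng(seed=12345 + rank)   # a distinct stream per rank
        n_local = 1_000_000 // size                      # particles handled by this rank

        def track_one(rng):
            # placeholder random walk: exponential path lengths until "absorption"
            depth, alive = 0.0, True
            while alive:
                depth += rng.exponential(1.0)
                alive = rng.random() > 0.3               # 30% absorption per collision
            return depth

        local_tally = sum(track_one(rng) for _ in range(n_local))
        total = comm.reduce(local_tally, op=MPI.SUM, root=0)
        if rank == 0:
            print("mean penetration depth:", total / (n_local * size))

    Run with, e.g., mpiexec -n 8 python mc_sketch.py; because the histories are independent, the speedup is expected to be nearly linear as long as each rank has enough particles to amortize the startup cost.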

  20. HINCOF-1: a Code for Hail Ingestion in Engine Inlets

    NASA Technical Reports Server (NTRS)

    Gopalaswamy, N.; Murthy, S. N. B.

    1995-01-01

    One of the major concerns during hail ingestion into an engine is the resulting amount and space- and time-wise distribution of hail at the engine face for a given inlet geometry and set of atmospheric and flight conditions. The appearance of hail in the capture streamtube is invariably random in space and time, with respect to size and momentum. During its motion through an inlet, a hailstone undergoes several processes, namely impact with other hailstones and with material surfaces of the inlet and spinner; rolling and rebound following impact; heat and mass transfer; phase change; and shattering, the latter three due to friction and impact. Taking all of these factors into account, a numerical code, designated HINCOF-I, has been developed for determining the motion of hailstones from the atmosphere, through an inlet, and up to the engine face. The numerical procedure is based on the Monte-Carlo method. The report presents a description of the code, along with several illustrative cases. The code can be utilized to relate the spinner geometry - conical or, more effectively, elliptical - to the possible diversion of hail at the engine face into the bypass stream. The code is also useful for assessing the influence of various hail characteristics on the ingestion and distribution of hailstones over the engine face.

  1. Ensuring quality in the coding process: A key differentiator for the accurate interpretation of safety data

    PubMed Central

    Nair, G. Jaya

    2013-01-01

    Medical coding and dictionaries for clinical trials have seen a wave of change over the past decade where emphasis on more standardized tools for coding and reporting clinical data has taken precedence. Coding personifies the backbone of clinical reporting as safety data reports primarily depend on the coded data. Hence, maintaining an optimum quality of coding is quintessential to the accurate analysis and interpretation of critical clinical data. The perception that medical coding is merely a process of assigning numeric/alphanumeric codes to clinical data needs to be revisited. The significance of quality coding and its impact on clinical reporting has been highlighted in this article. PMID:24010060

  2. Ensuring quality in the coding process: A key differentiator for the accurate interpretation of safety data.

    PubMed

    Nair, G Jaya

    2013-07-01

    Medical coding and dictionaries for clinical trials have seen a wave of change over the past decade where emphasis on more standardized tools for coding and reporting clinical data has taken precedence. Coding personifies the backbone of clinical reporting as safety data reports primarily depend on the coded data. Hence, maintaining an optimum quality of coding is quintessential to the accurate analysis and interpretation of critical clinical data. The perception that medical coding is merely a process of assigning numeric/alphanumeric codes to clinical data needs to be revisited. The significance of quality coding and its impact on clinical reporting has been highlighted in this article. PMID:24010060

  3. A Comparison of Source Code Plagiarism Detection Engines

    ERIC Educational Resources Information Center

    Lancaster, Thomas; Culwin, Fintan

    2004-01-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and…

  4. A high performance spectral code for nonlinear MHD stability

    SciTech Connect

    Taylor, M.

    1992-09-01

    A new spectral code, NSTAB, has been developed to do nonlinear stability and equilibrium calculations for the magnetohydrodynamic (MHD) equations in three dimensional toroidal geometries. The code has the resolution to test nonlinear stability by calculating bifurcated equilibria directly. These equilibria consist of weak solutions with current sheets near rational surfaces and other less localized modes. Bifurcated equilibria with a pronounced current sheet where the rotational transform crosses unity are calculated for the International Thermonuclear Experimental Reactor (ITER). Bifurcated solutions with broader resonances are found for the LHD stellarator currently being built in Japan and an optimized configuration like the Wendelstein VII-X proposed for construction in Germany. The code is able to handle the many harmonics required to capture the high mode number of these instabilities. NSTAB builds on the highly successful BETAS code, which applies the spectral method to a flux coordinate formulation of the variational principle associated with the MHD equilibrium equations. However, a new residue condition for the location of the magnetic axis has been developed and implemented. This condition is based on the weak formulation of the equations and imposes no constraints on the inner flux surfaces.

  5. Code-Switching in a College Mathematics Classroom

    ERIC Educational Resources Information Center

    Chitera, Nancy

    2009-01-01

    This paper presents findings from an exploration of the discourse practices of mathematics teacher educators in initial teacher training colleges in Malawi. It examines how mathematics teacher educators construct a multilingual classroom and how they view code-switching. The discussion is based on pre-observation interviews with four…

  6. A New Code for Proto-neutron Star Evolution

    NASA Astrophysics Data System (ADS)

    Roberts, L. F.

    2012-08-01

    A new code for following the evolution and emissions of proto-neutron stars during the first minute of their lives is developed and tested. The code is one dimensional, fully implicit, and general relativistic. Multi-group, multi-flavor neutrino transport is incorporated that makes use of variable Eddington factors obtained from a formal solution of the static general relativistic Boltzmann equation with linearized scattering terms. The timescales of neutrino emission and spectral evolution obtained using the new code are broadly consistent with previous results. Unlike other recent calculations, however, the new code predicts that the neutrino-driven wind will be characterized, at least for part of its existence, by a neutron excess. This change, potentially consequential for nucleosynthesis in the wind, is due to an improved treatment of the charged current interactions of electron-flavored neutrinos and anti-neutrinos with nucleons. A comparison is also made between the results obtained using either variable Eddington factors or simple equilibrium flux-limited diffusion. The latter approximation, which has been frequently used in previous studies of proto-neutron star cooling, accurately describes the total neutrino luminosities (to within 10%) for most of the evolution, until the proto-neutron star becomes optically thin.

  7. Overview of WARP: A particle code for heavy ion fusion

    NASA Astrophysics Data System (ADS)

    Friedman, Alex; Grote, David P.; Callahan, Debra A.; Langdon, A. Bruce; Haber, Irving

    1993-02-01

    The beams in a heavy ion beam driven inertial fusion (HIF) accelerator must be focused onto small spots at the fusion target, and so preservation of beam quality is crucial. The nonlinear self-fields of these space-charge-dominated beams can lead to emittance growth; thus, a self-consistent field description is necessary. We have developed a multi-dimensional discrete-particle simulation code, WARP, and are using it to study the behavior of HIF beams. The code's 3D package combines features of an accelerator code and a particle-in-cell plasma simulation, and can efficiently track beams through many lattice elements and around bends. We have used the code to understand the physics of aggressive drift-compression in the MBE-4 experiment at Lawrence Berkeley Laboratory (LBL). We have applied it to LBL's planned ILSE experiments, to various 'recirculator' configurations, and to the study of equilibria and equilibration processes. Applications of the 3D package to ESQ injectors, and of the r, z package to longitudinal stability in driver beams, are discussed in related papers.

  8. Overview of WARP, a particle code for heavy ion fusion

    NASA Astrophysics Data System (ADS)

    Friedman, Alex; Grote, David P.; Callahan, Debra A.; Langdon, A. Bruce; Haber, Irving

    1993-12-01

    The beams in a Heavy Ion beam driven inertial Fusion (HIF) accelerator must be focused onto small spots at the fusion target, and so preservation of beam quality is crucial. The nonlinear self-fields of these space-charge-dominated beams can lead to emittance growth; thus a self-consistent field description is necessary. We have developed a multi-dimensional discrete-particle simulation code, WARP, and are using it to study the behavior of HIF beams. The code's 3d package combines features of an accelerator code and a particle-in-cell plasma simulation, and can efficiently track beams through many lattice elements and around bends. We have used the code to understand the physics of aggressive drift-compression in the MBE-4 experiment at Lawrence Berkeley Laboratory (LBL). We have applied it to LBL's planned ILSE experiments, to various ``recirculator'' configurations, and to the study of equilibria and equilibration processes. Applications of the 3d package to ESQ injectors, and of the r, z package to longitudinal stability in driver beams, are discussed in related papers.

  9. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    After Toshiba withdrew from the format competition, Blu-ray Disc became the only standard for high-density video discs, and almost all of the relevant patents are held by large companies such as Sony and Philips; as a result, substantial licensing fees are due whenever our products use BD. Next-Generation Versatile Disc (NVD), our own high-density optical disc storage system, proposes a new data format and error-correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity, sufficient for playing high-density video programs. In this paper, we develop a Low-Density Parity-Check (LDPC) channel-coding scheme: a new encoding process and application scheme based on a Q-matrix is applied in NVD's channel decoder. Exploiting the portability of the embedded SOPC system, we have implemented all of the decoding modules on an FPGA and tested them in the NVD experimental environment. Although LDPC codes can conflict with the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the proposed system provides a workable solution, and it overcomes the instability and inextensibility of NVD's former, hardware-only decoding system.
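
    For readers unfamiliar with LDPC decoding, the fragment below shows a generic hard-decision bit-flipping decoder on a toy parity-check matrix; it only illustrates the iterative parity-check principle and has no connection to the Q-matrix construction, the RLL interaction, or the FPGA implementation described above.

        import numpy as np

        def bit_flip_decode(H, received, max_iters=50):
            """H: (m, n) binary parity-check matrix; received: length-n hard decisions."""
            word = received.copy()
            for _ in range(max_iters):
                syndrome = H.dot(word) % 2
                if not syndrome.any():
                    return word, True                    # all parity checks satisfied
                # flip the bit involved in the largest number of failed checks
                failed_counts = H[syndrome == 1].sum(axis=0)
                word[np.argmax(failed_counts)] ^= 1
            return word, False

        # toy example: the (7,4) Hamming parity-check matrix and a single bit error
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        noisy = np.zeros(7, dtype=int)                   # the all-zero codeword ...
        noisy[2] ^= 1                                    # ... with bit 2 flipped
        print(bit_flip_decode(H, noisy))                 # recovers the all-zero word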

  10. RTE: A computer code for Rocket Thermal Evaluation

    NASA Technical Reports Server (NTRS)

    Naraghi, Mohammad H. N.

    1995-01-01

    The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for the thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial and circumferential directions. By implementing an iterative scheme, it provides the nodal temperature distribution, rates of heat transfer, and hot-gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
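
    A minimal sketch of this kind of iterative nodal energy balance - a 1-D wall between hot combustion gas and coolant, solved by Gauss-Seidel sweeps - is shown below. The property values, node count and boundary treatment are arbitrary illustrative choices, far simpler than the multi-directional network solved by RTE.

        import numpy as np

        def wall_temperatures(n_nodes=20, k=300.0, dx=1e-3,
                              h_gas=5e3, T_gas=3200.0, h_cool=2e4, T_cool=120.0,
                              tol=1e-6, max_sweeps=100_000):
            T = np.full(n_nodes, 0.5 * (T_gas + T_cool))
            for _ in range(max_sweeps):
                T_old = T.copy()
                # hot-gas-side node: convection balanced against conduction into the wall
                T[0] = (h_gas * T_gas + (k / dx) * T[1]) / (h_gas + k / dx)
                for i in range(1, n_nodes - 1):          # interior conduction nodes
                    T[i] = 0.5 * (T[i - 1] + T[i + 1])
                # coolant-side node: conduction balanced against convection to the coolant
                T[-1] = (h_cool * T_cool + (k / dx) * T[-2]) / (h_cool + k / dx)
                if np.max(np.abs(T - T_old)) < tol:
                    break
            heat_flux = h_gas * (T_gas - T[0])           # W/m^2 absorbed by the wall
            return T, heat_flux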

  11. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  12. Codes of Ethics in Australian Education: Towards a National Perspective

    ERIC Educational Resources Information Center

    Forster, Daniella J.

    2012-01-01

    Teachers have a dual moral responsibility as both values educators and moral agents representing the integrity of the profession. Codes of ethics and conduct in teaching articulate shared professional values and aim to provide some guidance for action around recognised issues special to the profession but are also instruments of regulation which…

  13. Connecting Neural Coding to Number Cognition: A Computational Account

    ERIC Educational Resources Information Center

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…

  14. A NEW CODE FOR PROTO-NEUTRON STAR EVOLUTION

    SciTech Connect

    Roberts, L. F.

    2012-08-20

    A new code for following the evolution and emissions of proto-neutron stars during the first minute of their lives is developed and tested. The code is one dimensional, fully implicit, and general relativistic. Multi-group, multi-flavor neutrino transport is incorporated that makes use of variable Eddington factors obtained from a formal solution of the static general relativistic Boltzmann equation with linearized scattering terms. The timescales of neutrino emission and spectral evolution obtained using the new code are broadly consistent with previous results. Unlike other recent calculations, however, the new code predicts that the neutrino-driven wind will be characterized, at least for part of its existence, by a neutron excess. This change, potentially consequential for nucleosynthesis in the wind, is due to an improved treatment of the charged current interactions of electron-flavored neutrinos and anti-neutrinos with nucleons. A comparison is also made between the results obtained using either variable Eddington factors or simple equilibrium flux-limited diffusion. The latter approximation, which has been frequently used in previous studies of proto-neutron star cooling, accurately describes the total neutrino luminosities (to within 10%) for most of the evolution, until the proto-neutron star becomes optically thin.

  15. The GOES Time Code Service, 1974-2004: A Retrospective.

    PubMed

    Lombardi, Michael A; Hanson, D Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  16. Encoding and decoding a telecommunication standard command code

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Truong, T. K.

    1977-01-01

    A simple encoder/decoder implementation scheme is described for the (63,56) BCH code, which can be used to correct single errors and to detect any even number of errors. The scheme is feasible for onboard spacecraft implementation.
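
    A hedged sketch of how such a scheme can work is given below: systematic cyclic encoding by polynomial division, a syndrome check, and single-error correction by trial. The generator polynomial g(x) = (x + 1)(x^6 + x + 1) is one standard choice that yields a (63,56) distance-4 code with 7 check bits; it is an illustrative assumption and may differ from the polynomial used in the report.

        G_POLY = 0b11000101              # g(x) = x^7 + x^6 + x^2 + 1
        N, K, R = 63, 56, 7

        def poly_mod(value, bits=N):
            """Remainder of a GF(2) polynomial (an int with 'bits' coefficients) modulo g(x)."""
            for shift in range(bits - 1, R - 1, -1):
                if value & (1 << shift):
                    value ^= G_POLY << (shift - R)
            return value

        def encode(msg):
            """56-bit message integer -> 63-bit systematic codeword (message | check bits)."""
            return (msg << R) | poly_mod(msg << R)

        def decode(word):
            """Return (word, status) with status 'ok', 'corrected' or 'detected'."""
            syndrome = poly_mod(word)
            if syndrome == 0:
                return word, "ok"
            for pos in range(N):                         # try every single-bit error pattern
                if poly_mod(1 << pos) == syndrome:
                    return word ^ (1 << pos), "corrected"
            return word, "detected"                      # e.g. a double error

        cw = encode(0xDEADBEEFCAFE12)                    # any 56-bit message
        assert decode(cw ^ (1 << 10))[1] == "corrected"  # a single error is fixed
        assert decode(cw ^ 0b101)[1] == "detected"       # a double error is flagged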

  17. Validation of a coupled reactive transport code in porous media

    NASA Astrophysics Data System (ADS)

    Mugler, C.; Montarnal, P.; Dimier, A.

    2003-04-01

    The safety assessment of nuclear waste disposals needs to predict the migration of radionuclides and chemical species through a geological medium. It is therefore necessary to develop and assess qualified and validated tools which integrate both the transport mechanisms through the geological media and the chemical mechanisms governing the mobility of radionuclides. In this problem, geochemical and hydrodynamic phenomena are tightly linked. That is the reason why the French Nuclear Energy Agency (CEA) and the French Agency for the Management of Radioactive Wastes (ANDRA) are jointly developing a coupled reactive transport code that solves a geochemical model and a transport model simultaneously. This code, which is part of the software project ALLIANCES, relies on the libraries of two geochemical codes solving the complex ensemble of reacting chemical species: CHESS and PHREEQC. Geochemical processes considered here include ion exchange, redox reactions, acid-base reactions, surface complexation and mineral dissolution and/or precipitation. Transport is simulated using the mixed-hybrid finite element scheme CAST3M or the finite volume scheme MT3D. Together they solve Darcy's law and simulate several hydrological processes such as advection, diffusion and dispersion. The coupling algorithm is an iterative sequential algorithm. Several analytical test cases have been defined and used to validate the reactive transport code. Numerical results can be compared to analytical solutions.
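
    The operator-split structure underlying such a sequential coupling can be sketched as follows: each time step, a transport operator moves the mobile concentrations, then a local chemistry operator updates every cell; the production code iterates this pair until the two sub-solutions are mutually consistent, an outer loop omitted here. The upwind transport scheme and the toy dissolution/precipitation law are illustrative placeholders, not CAST3M/MT3D or CHESS/PHREEQC.

        import numpy as np

        def transport_step(c, velocity, dispersion, dx, dt):
            """Explicit upwind advection plus central-difference dispersion in 1-D
            (velocity > 0; fixed inlet concentration, periodic wrap elsewhere)."""
            adv = -velocity * (c - np.roll(c, 1)) / dx
            disp = dispersion * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
            out = c + dt * (adv + disp)
            out[0] = c[0]
            return out

        def chemistry_step(c, solid, k_rate, c_eq, dt):
            """Toy kinetic dissolution/precipitation of a mineral toward solubility c_eq."""
            rate = k_rate * (c_eq - c)                 # > 0 dissolves, < 0 precipitates
            rate = np.where(solid <= 0.0, np.minimum(rate, 0.0), rate)
            return c + dt * rate, solid - dt * rate

        def coupled_step(c, solid, pars):
            """One sequential (transport-then-chemistry) step of the coupled problem."""
            c = transport_step(c, pars["v"], pars["D"], pars["dx"], pars["dt"])
            return chemistry_step(c, solid, pars["k"], pars["c_eq"], pars["dt"])

    The explicit scheme above is stable only for sufficiently small time steps (Courant and diffusion limits), which is one reason production codes prefer implicit finite element or finite volume transport operators.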

  18. A Draft Code of Ethics for Institutional Research.

    ERIC Educational Resources Information Center

    New Directions for Institutional Research, 1992

    1992-01-01

    A draft code of ethics for college and university institutional researchers and related activities, prepared by the Committee on Standards and Ethics of the Association for Institutional Research, is presented. It addresses issues of individual training and competence, responsibility, confidentiality, relationships within institution and…

  19. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

    A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 97), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to compute the thermal history of the fluid component accurately. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low density LCDM universe are finally reported, with 256^3 particles and 4.1 x 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum and individual halo temperature profiles, shows that numerical results are converging down to the actual resolution limit of the code, and are well reproduced by recent analytical predictions in the framework of the halo model.

  20. A semianalytic Monte Carlo code for modelling LIDAR measurements

    NASA Astrophysics Data System (ADS)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements appears to be a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. Contributions from molecular absorption and from ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, which can enable the user to drastically reduce the variance of the calculation.
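
    The essence of the semianalytic (local-estimate) approach can be sketched for a monostatic lidar in a homogeneous atmosphere: at every collision the photon packet adds its analytically computed probability of scattering straight back to the receiver, weighted by the return-path attenuation, then scatters on randomly, and Russian roulette removes low-weight packets without biasing the mean. The isotropic phase function, the omitted receiver solid angle and range-squared factors, and all numerical values are deliberate simplifications, not the ISAC-CNR implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        beta_sca, beta_ext = 1.0e-3, 1.2e-3        # scattering / extinction coefficients [1/m]
        n_photons, max_range, n_bins = 20_000, 5_000.0, 50
        signal = np.zeros(n_bins)

        for _ in range(n_photons):
            z, mu, weight = 0.0, 1.0, 1.0          # height, direction cosine, packet weight
            while True:
                z += mu * (-np.log(rng.random()) / beta_ext)   # free path to next collision
                if z <= 0.0 or z >= max_range:
                    break
                # local estimate: density of scattering into the receiver direction
                # (isotropic phase function, 1/4pi per steradian) times return attenuation
                contrib = weight * (beta_sca / beta_ext) / (4.0 * np.pi) * np.exp(-beta_ext * z)
                signal[int(z / max_range * n_bins)] += contrib
                weight *= beta_sca / beta_ext      # survival probability of the collision
                mu = 2.0 * rng.random() - 1.0      # isotropic re-emission
                if weight < 0.01:                  # Russian roulette
                    if rng.random() < 0.5:
                        break
                    weight *= 2.0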

  1. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn considerable attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations for the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
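
    The coupling pattern can be sketched as follows: after each hydrodynamic update, the species abundances in every cell are advanced with a stiff ODE integrator over the hydrodynamic time step. The two-reaction H2 network and its rate coefficients below are placeholders, not values taken from the astrochemical databases mentioned above.

        import numpy as np
        from scipy.integrate import solve_ivp

        K_FORM, K_DISS = 1e-17, 1e-11              # illustrative rate coefficients

        def network_rhs(t, y):
            nH, nH2 = y
            form = K_FORM * nH * nH                # 2H -> H2 (schematically, on grains)
            diss = K_DISS * nH2                    # H2 + photon -> 2H
            return [-2.0 * form + 2.0 * diss, form - diss]

        def react_cell(nH, nH2, dt):
            """Advance one cell's abundances over dt with an implicit (stiff) integrator."""
            sol = solve_ivp(network_rhs, (0.0, dt), [nH, nH2],
                            method="BDF", rtol=1e-6, atol=1e-30)
            return sol.y[0, -1], sol.y[1, -1]

        def chemistry_sweep(density_H, density_H2, dt):
            """Call the network in every cell after the hydrodynamic update."""
            for idx in np.ndindex(density_H.shape):
                density_H[idx], density_H2[idx] = react_cell(density_H[idx], density_H2[idx], dt)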

  2. A surface definition code for turbine blade surfaces

    SciTech Connect

    Yang, S.L. ); Oryang, D.; Ho, M.J. )

    1992-05-01

    A numerical interpolation scheme has been developed for generating the three-dimensional geometry of wind turbine blades. The numerical scheme consists of (1) creating the frame of the blade through the input of two or more airfoils at some specific spanwise stations and then scaling and twisting them according to the prescribed distributions of chord, thickness, and twist along the span of the blade; (2) transforming the physical coordinates of the blade frame into a computational domain that complies with the interpolation requirements; and finally (3) applying the bi-tension spline interpolation method, in the computational domain, to determine the coordinates of any point on the blade surface. Detailed descriptions of the overall approach to and philosophy of the code development are given along with the operation of the code. To show the usefulness of the bi-tension spline interpolation code developed, two examples are given, namely CARTER and MICON blade surface generation. Numerical results are presented in both graphic and data forms. The solutions obtained in this work show that the computer code developed can be a powerful tool for generating the surface coordinates for any three-dimensional blade.

  3. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large-code-size limit is of 1 - O(p), where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  4. 46 CFR Appendix A to Part 520 - Standard Terminology and Codes

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Standard Terminology and Codes A Appendix A to Part 520... AUTOMATED TARIFFS Pt. 520, App. A Appendix A to Part 520—Standard Terminology and Codes I—Publishing/Amendment Type Codes Code Definition A Increase. C Change resulting in neither increase nor decrease in...

  5. 46 CFR Appendix A to Part 520 - Standard Terminology and Codes

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 9 2014-10-01 2014-10-01 false Standard Terminology and Codes A Appendix A to Part 520... AUTOMATED TARIFFS Pt. 520, App. A Appendix A to Part 520—Standard Terminology and Codes I—Publishing/Amendment Type Codes Code Definition A Increase. C Change resulting in neither increase nor decrease in...

  6. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are: (1) show a plan for using uplink coding and describe benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur with our conclusions so we can embark on a plan to use the proposed uplink system; (4) identify the need for the development of appropriate technology and infusion in the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14: Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  7. Parallelizing a DNA simulation code for the Cray MTA-2.

    PubMed

    Bokhari, Shahid H; Glaser, Matthew A; Jordan, Harry F; Lansac, Yves; Sauer, Jon R; Van Zeghbroeck, Bart

    2002-01-01

    The Cray MTA-2 (Multithreaded Architecture) is an unusual parallel supercomputer that promises ease of use and high performance. We describe our experience on the MTA-2 with a molecular dynamics code, SIMU-MD, that we are using to simulate the translocation of DNA through a nanopore in a silicon based ultrafast sequencer. Our sequencer is constructed using standard VLSI technology and consists of a nanopore surrounded by Field Effect Transistors (FETs). We propose to use the FETs to sense variations in charge as a DNA molecule translocates through the pore and thus differentiate between the four building block nucleotides of DNA. We were able to port SIMU-MD, a serial C code, to the MTA with only a modest effort and with good performance. Our porting process needed neither a parallelism support platform nor attention to the intimate details of parallel programming and interprocessor communication, as would have been the case with more conventional supercomputers. PMID:15838145

  8. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A main obstacle to indoor navigation is maintaining a robust navigation solution (i.e. air vehicle position and attitude estimates) given the inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information regarding direction for vehicle movement, but not the scale of movement. Movement scale information is contained in the depth to the features. Depth-from-defocus is a classic technique proposed to derive depth from a single image that involves analysis of the blur inherent in a scene with a narrow depth of field. A challenge to this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. A key to measuring depth using a coded aperture then is to correctly match the blur pattern in a region of the scene with a previously determined set of blur patterns for known depths. As the depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature to be measured using a depth-from-defocus technique increases, the measurement performance decreases. However, a Fresnel zone

  9. DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Kargaran, H.; Minuchehr, A.; Zolfaghari, A.

    2016-07-01

    In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in the FORTRAN90 language with a capability for parallel processing using the OpenMP framework. DgSMC-B is capable of handling 3-dimensional (3D) geometries created with first- and second-order surfaces. It performs independent particle tracking for complex geometry without the intervention of a mesh. In addition, it resolves the computational domain boundary and volume computation in border grids using a hexahedral mesh. The developed code is a robust and self-contained code which does not rely on any separate tool such as a mesh generator. The results of six test cases are presented to indicate its ability to deal with a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems. The results are found to be in good agreement with references and experimental data.

  10. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q. . Plasma Physics Lab.); Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L. )

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  11. Parallelization of the Legendre Transform for a Geodynamics Code

    NASA Astrophysics Data System (ADS)

    Lokavarapu, H. V.; Matsui, H.; Heien, E. M.

    2014-12-01

    Calypso is a geodynamo code designed to model magnetohydrodynamics of a Boussinesq fluid in a rotating spherical shell, such as the outer core of Earth. The code has been shown to scale well on computer clusters capable of computing at the order of millions of core hours. Depending on the resolution and time requirements, simulations may require weeks to years of clock time for specific target problems. A significant portion of the code execution time is spent transforming computed quantities between physical values and spherical harmonic coefficients, equivalent to a series of linear algebra operations. Intermixing C and Fortran code has opened the door to the parallel computing platform, Cuda and its associated libraries. We successfully implemented the parallelization of the scaling of the Legendre polynomials by both Schmidt Normalization coefficients, and a set of weighting coefficients; however, the expected speedup was not realized. Specifically, the original version of Calypso 1.1 computes the Legendre transform approximately four seconds faster than the Cuda-enabled modified version. By profiling the code, we determined that the time taken to transfer the data from host memory to GPU memory does not compare to the number of computations happening within the GPU. Nevertheless, by utilizing techniques such as memory coalescing, cached memory, pinned memory, dynamic parallelism, asynchronous calls, and overlapped memory transfers with computations, the likelihood of a speedup increases. Moreover, ideally the generation of the Legendre polynomial coefficients, Schmidt Normalization Coefficients, and the set of weights should not only be parallelized but be computed on-the-fly within the GPU. The end result is that we reduce the number of memory transfers from host to GPU, increase the number of parallelized computations on the GPU, and decrease the number of serial computations on the CPU. Also, the time taken to transform physical values to spherical

  12. On a stochastic approach to a code performance estimation

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

    The main goal of efficient profiling of software is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during execution affect the performance of the system itself. One important aspect of this overhead arises from the random variability of the context in which the application is embedded, e.g., due to possible cache misses, etc. Such uncertainty needs to be taken into account in the design phase. In order to overcome these difficulties we propose to investigate this issue through the analysis of the probability distribution of the differences between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler using plotting of 3D surfaces by the function surf. The idea can be used for estimating program efficiency.
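
    A small experiment in this spirit can be set up as follows: time the same code region repeatedly, take the differences between successive timings, and fit their distribution with a finite mixture of normals. Here scikit-learn's GaussianMixture stands in for the method of moving separation of mixtures used in the paper, and the timed function is an arbitrary example.

        import time
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def timing_deltas(fn, repeats=500):
            """Differences between successive wall-clock timings of the same code."""
            times = np.empty(repeats)
            for i in range(repeats):
                t0 = time.perf_counter()
                fn()
                times[i] = time.perf_counter() - t0
            return np.diff(times)

        deltas = timing_deltas(lambda: sum(x * x for x in range(10_000)))
        gm = GaussianMixture(n_components=2).fit(deltas.reshape(-1, 1))
        print("component weights:", gm.weights_)
        print("component means  :", gm.means_.ravel())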

  13. ICOOL: A SIMULATION CODE FOR IONIZATION COOLING OF MUON BEAMS.

    SciTech Connect

    FERNOW,R.C.

    1999-03-25

    Current ideas [1,2] for designing a high luminosity muon collider require significant cooling of the phase space of the muon beams. The only known method that can cool the beams in a time comparable to the muon lifetime is ionization cooling [3,4]. This method requires directing the particles in the beam at a large angle through a low Z absorber material in a strong focusing magnetic channel and then restoring the longitudinal momentum with an rf cavity. We have developed a new 3-D tracking code ICOOL for examining possible configurations for muon cooling. A cooling system is described in terms of a series of longitudinal regions with associated material and field properties. The tracking takes place in a coordinate system that follows a reference orbit through the system. The code takes into account decays and interactions of ~50-500 MeV/c muons in matter. Material geometry regions include cylinders and wedges. A number of analytic models are provided for describing the field configurations. Simple diagnostics are built into the code, including calculation of emittances and correlations, longitudinal traces, histograms and scatter plots. A number of auxiliary files can be generated for post-processing analysis by the user.

  14. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGESBeta

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  15. Nexus: a modular workflow management system for quantum simulation codes

    SciTech Connect

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  16. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  17. A Plastic Temporal Brain Code for Conscious State Generation

    PubMed Central

    Dresp-Langley, Birgitta; Durup, Jean

    2009-01-01

    Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms. PMID:19644552

  18. A plastic temporal brain code for conscious state generation.

    PubMed

    Dresp-Langley, Birgitta; Durup, Jean

    2009-01-01

    Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms. PMID:19644552

  19. Undetected error probability and throughput analysis of a concatenated coding scheme

    NASA Technical Reports Server (NTRS)

    Costello, D. J.

    1984-01-01

    The performance of a proposed concatenated coding scheme for error control on a NASA telecommand system is analyzed. In this scheme, the inner code is a distance-4 Hamming code used for both error correction and error detection. The outer code is a shortened distance-4 Hamming code used only for error detection. Interleaving is assumed between the inner and outer codes. A retransmission is requested if either the inner or outer code detects the presence of errors. Both the undetected error probability and the throughput of the system are analyzed. Results indicate that high throughputs and extremely low undetected error probabilities are achievable using this scheme.
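
    The two quantities analysed in the report can also be estimated by simulation. The sketch below uses a small extended Hamming (8,4) code as a stand-in for the longer distance-4 codes of the scheme: a word is retransmitted whenever the check fails, an undetected error is counted when the check passes but the data are wrong, and throughput is the ratio of accepted information bits to transmitted channel bits. The code, channel model and parameters are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        # systematic generator of the (7,4) Hamming code, extended by an overall parity bit
        G7 = np.array([[1, 0, 0, 0, 0, 1, 1],
                       [0, 1, 0, 0, 1, 0, 1],
                       [0, 0, 1, 0, 1, 1, 0],
                       [0, 0, 0, 1, 1, 1, 1]])
        G8 = np.hstack([G7, G7.sum(axis=1, keepdims=True) % 2])
        CODEBOOK = {tuple(np.array(m) @ G8 % 2) for m in np.ndindex(2, 2, 2, 2)}

        def simulate(p, n_words=50_000):
            undetected, sent_bits, accepted_bits = 0, 0, 0
            for _ in range(n_words):
                msg = rng.integers(0, 2, 4)
                codeword = msg @ G8 % 2
                while True:
                    sent_bits += 8
                    received = codeword ^ (rng.random(8) < p).astype(int)
                    if tuple(received) in CODEBOOK:      # detection only: accept valid words
                        accepted_bits += 4
                        if not np.array_equal(received[:4], msg):
                            undetected += 1              # accepted, but the data are wrong
                        break
                    # otherwise the word is rejected and a retransmission is requested
            return undetected / n_words, accepted_bits / sent_bits

        print(simulate(p=0.05))                          # (undetected-error rate, throughput)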

  20. CBEAM. 2-D: a two-dimensional beam field code

    SciTech Connect

    Dreyer, K.A.

    1985-05-01

    CBEAM.2-D is a two-dimensional solution of Maxwell's equations for the case of an electron beam propagating through an air medium. Solutions are performed in the beam-retarded time frame. Conductivity is calculated self-consistently with field equations, allowing sophisticated dependence of plasma parameters to be handled. A unique feature of the code is that it is implemented on an IBM PC microcomputer in the BASIC language. Consequently, it should be available to a wide audience.

  1. A domain decomposition scheme for Eulerian shock physics codes

    SciTech Connect

    Bell, R.L.; Hertel, E.S. Jr.

    1994-08-01

    A new algorithm which allows for complex domain decomposition in Eulerian codes was developed at Sandia National Laboratories. This new feature allows a user to customize the zoning for each portion of a calculation and to refine volumes of the computational space of particular interest. This option is available in one, two, and three dimensions. The new technique will be described in detail and several examples of the effectiveness of this technique will also be discussed.

  2. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false What will happen if a tribe repeals its probate code? 18.111 Section 18.111 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR PROBATE TRIBAL PROBATE CODES Approval of Tribal Probate Codes § 18.111 What will happen if a tribe repeals its probate code?...

  3. A chemical reaction network solver for the astrophysics code NIRVANA

    NASA Astrophysics Data System (ADS)

    Ziegler, U.

    2016-02-01

    Context. Chemistry often plays an important role in astrophysical gases. It regulates thermal properties by changing species abundances and via ionization processes. This way, time-dependent cooling mechanisms and other chemistry-related energy sources can have a profound influence on the dynamical evolution of an astrophysical system. Modeling those effects with the underlying chemical kinetics in realistic magneto-gasdynamical simulations provides the basis for a better link to observations. Aims: The present work describes the implementation of a chemical reaction network solver into the magneto-gasdynamical code NIRVANA. For this purpose a multispecies structure is installed, and a new module for evolving the rate equations of chemical kinetics is developed and coupled to the dynamical part of the code. A small chemical network for a hydrogen-helium plasma, including associated thermal processes, was constructed and is used in test problems. Methods: Evolving a chemical network within time-dependent simulations requires the additional solution of a set of coupled advection-reaction equations for species and gas temperature. Second-order Strang splitting is used to separate the advection part from the reaction part. The ordinary differential equation (ODE) system representing the reaction part is solved with a fourth-order generalized Runge-Kutta method applicable to the stiff systems inherent to astrochemistry. Results: A series of tests was performed in order to check the correctness of the numerical and technical implementation. Tests include well-known stiff ODE problems from the mathematical literature, in order to confirm the accuracy properties of the solver used, as well as problems combining gasdynamics and chemistry. Overall, very satisfactory results are achieved. Conclusions: The NIRVANA code is now ready to handle astrochemical processes in time-dependent simulations. An easy-to-use interface allows implementation of complex networks including thermal processes
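
    A minimal sketch of the splitting structure summarized in the Methods above is given below: a half step of advection, a full step of stiff reactions, then another half step of advection. The upwind advection operator, the toy two-species network and the use of SciPy's implicit Radau integrator in place of the generalized Runge-Kutta solver are all illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        def advect(species, velocity, dx, dt):
            """First-order upwind advection of every species field (velocity > 0, periodic)."""
            return species - velocity * dt / dx * (species - np.roll(species, 1, axis=-1))

        def react(species, dt, rhs):
            """Integrate the stiff reaction ODEs cell by cell with an implicit method."""
            out = species.copy()
            for j in range(species.shape[-1]):
                sol = solve_ivp(rhs, (0.0, dt), species[:, j], method="Radau", rtol=1e-6)
                out[:, j] = sol.y[:, -1]
            return out

        def strang_step(species, velocity, dx, dt, rhs):
            """Second-order Strang splitting: half advection, full reaction, half advection."""
            species = advect(species, velocity, dx, 0.5 * dt)
            species = react(species, dt, rhs)
            return advect(species, velocity, dx, 0.5 * dt)

        # toy stiff network: interconversion A <-> B with very different rates
        rhs = lambda t, y: np.array([-1e4 * y[0] + 1.0 * y[1], 1e4 * y[0] - 1.0 * y[1]])
        fields = np.vstack([np.ones(64), np.zeros(64)])  # shape: (n_species, n_cells)
        fields = strang_step(fields, velocity=1.0, dx=0.1, dt=0.01, rhs=rhs)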

  4. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    -- The advent of payment by results has seen the role of the clinical coder pushed to the fore in England.
    -- Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification.
    -- Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  5. A reference manual for the Event Progression Analysis Code (EVNTRE)

    SciTech Connect

    Griesmeyer, J.M.; Smith, L.N.

    1989-09-01

    This document is a reference guide for the Event Progression Analysis (EVNTRE) code developed at Sandia National Laboratories. EVNTRE is designed to process the large accident progression event trees and associated files used in probabilistic risk analyses for nuclear power plants. However, the general nature of EVNTRE makes it applicable to a wide variety of analyses that involve the investigation of a progression of events which lead to a large number of sets of conditions or scenarios. The EVNTRE code efficiently processes large, complex event trees. It has the capability to assign probabilities to event tree branch points in several different ways, to classify pathways or outcomes into user-specified groupings, and to sample input distributions of probabilities and parameters.

  6. Petascale electronic structure code with a new parallel eigensolver

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Li, Yan; Kelley, Ct; Bernholc, Jerzy

    2015-03-01

    We describe recent developments within the Real Space Multigrid (RMG) electronic structure code. RMG uses real-space grids, a multigrid pre-conditioner, and subspace diagonalization to solve the Kohn-Sham equations. It is designed for use on massively parallel computers and has shown excellent scalability and performance, reaching 6.5 PFLOPS on 18k Cray compute nodes with 288k CPU cores and 18k GPUs. For large problems, the diagonalization becomes computationally dominant and a novel, highly parallel eigensolver was developed that makes efficient use of a large number of nodes. Test results for a range of problem sizes are presented, which execute up to 3.5 times faster than standard eigensolvers such as Scalapack. RMG is now an open source code, running on Linux, Windows and MacIntosh systems. It may be downloaded at .

  7. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    Energy Science and Technology Software Center (ESTSC)

    2006-04-01

    Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.

  8. RHALE: A 3-D MMALE code for unstructured grids

    SciTech Connect

    Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.

    1993-08-01

    This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two and three dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.

  9. Modulation and coding used by a major satellite communications company

    NASA Technical Reports Server (NTRS)

    Renshaw, K. H.

    1992-01-01

    Hughes Communications Inc., is a major satellite communications company providing or planning to provide the full spectrum of services available on satellites. All of the current services use conventional modulation and coding techniques that were well known a decade or longer ago. However, the future mobile satellite service will use significantly more advanced techniques. JPL, under NASA sponsorship, has pioneered many of the techniques that will be used.

  10. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Office of the Federal Register pursuant to 5 U.S.C. 552(a) and 1 CFR part 51. NRC Regulatory Guide 1.84... pursuant to 5 U.S.C. 552(a) and 1 CFR part 51. These RGs list ASME Code cases that the NRC has approved in..., call 202-741-6030, or go to: http://www.archives.gov/federal-register/cfr/ibr-locations.html. (1)...

  11. Coding strategies for a single-channel tactile aid.

    PubMed

    Summers, I R; Farr, J

    1989-11-01

    Measurements have been made on the ability of normally hearing subjects to identify the stressed word in simple sentences, using only tactile information. Vibrotactile stimuli were presented to the distal pad of the second finger via a single vibrator. A range of coding strategies was investigated, voice pitch or speech amplitude being represented as stimulus frequency and/or intensity. Test results show that, even without specific training, subjects can be quite successful in identifying stress patterns. The most effective coding strategies were (i) voice frequency presented as continuously variable stimulus frequency over the range 40-220 Hz, with a correlated modulation of stimulus amplitude, (ii) speech amplitude presented as two discrete levels of stimulus amplitude. PMID:2605382

  12. Failure criteria used in a probabilistic fracture mechanics code

    SciTech Connect

    Lo, T.Y.

    1985-01-01

    Two criteria are implemented in a piping reliability analysis code to assess the stability of crack growth in pipes. One is the critical net section stress criterion. It is simple and convenient but its application is limited to very ductile materials. The other is the tearing modulus stability criterion. This criterion has a solid technical base. However, calculating the J-integral, J, and the associated tearing modulus, T, usually requires a complicated finite element method (FEM). In this piping reliability code, existing J and T solutions in tabular or formula form instead of the FEM are used for computational efficiency. These two failure criteria are discussed and compared in terms of their effects on the estimation of pipe failure probability. 5 refs., 9 figs.

  13. Universal transversal gates with color codes: A simplified approach

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.

    2015-03-01

    We provide a simplified yet rigorous presentation of the ideas from Bombín's paper (arXiv:1311.0879v3). Our presentation is self-contained, and assumes only basic concepts from quantum error correction. We provide an explicit construction of a family of color codes in arbitrary dimensions and describe some of their crucial properties. Within this framework, we explicitly show how to transversally implement the generalized phase gate R_n = diag(1, e^{2πi/2^n}), which deviates from the method in the aforementioned paper, allowing an arguably simpler proof. We describe how to implement the Hadamard gate H fault tolerantly using code switching. In three dimensions, this yields, together with the transversal controlled-not (CNOT), a fault-tolerant universal gate set {H, CNOT, R_3} without state distillation.

  14. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  15. A signature of neural coding at human perceptual limits.

    PubMed

    Bays, Paul M

    2016-09-01

    Simple visual features, such as orientation, are thought to be represented in the spiking of visual neurons using population codes. I show that optimal decoding of such activity predicts characteristic deviations from the normal distribution of errors at low gains. Examining human perception of orientation stimuli, I show that these predicted deviations are present at near-threshold levels of contrast. The findings may provide a neural-level explanation for the appearance of a threshold in perceptual awareness whereby stimuli are categorized as seen or unseen. As well as varying in error magnitude, perceptual judgments differ in certainty about what was observed. I demonstrate that variations in the total spiking activity of a neural population can account for the empirical relationship between subjective confidence and precision. These results establish population coding and decoding as the neural basis of perception and perceptual confidence. PMID:27604067

  16. A new neutron energy spectrum unfolding code using a two steps genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Hosseini, S. A.; Sohrabpour, M.

    2016-03-01

    A new neutron spectrum unfolding code TGASU (Two-steps Genetic Algorithm Spectrum Unfolding) has been developed to unfold the neutron spectrum from a pulse height distribution which was calculated using the MCNPX-ESUT computational Monte Carlo code. To perform the unfolding process, the response matrices were generated using the MCNPX-ESUT computational code. Both a one-step (common GA) and a two-step GA have been implemented to unfold the neutron spectra. According to the obtained results, the new two-step GA code shows a closer match in all energy regions, particularly in the high-energy regions. The results of the TGASU code have been compared with those of the standard spectra, the LSQR method and the GAMCD code, and have been demonstrated to be more accurate than those of the existing computational codes for both under-determined and over-determined problems.
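
    To make the genetic-algorithm idea concrete, the Python sketch below performs one generic GA generation for unfolding: candidate spectra are scored by how well the response matrix folds them into the measured pulse-height distribution, the better half is kept, and new candidates are produced by one-point crossover and Gaussian mutation. This is a generic single-step GA with made-up operators and parameters, not TGASU's actual two-step algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(phi, R, m):
            return -np.sum((R @ phi - m) ** 2)             # negative misfit to the measurement

        def next_generation(pop, R, m, mut=0.05):
            scores = np.array([fitness(p, R, m) for p in pop])
            parents = pop[np.argsort(scores)[::-1][: len(pop) // 2]]   # keep the better half
            children = []
            for _ in range(len(pop) - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, a.size)              # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child += mut * rng.standard_normal(child.size)         # Gaussian mutation
                children.append(np.clip(child, 0.0, None)) # spectra stay non-negative
            return np.vstack([parents] + children)

        R = rng.random((16, 8))                            # toy response matrix
        m = R @ rng.random(8)                              # synthetic pulse-height measurement
        pop = rng.random((20, 8))                          # initial population of candidate spectra
        for _ in range(100):
            pop = next_generation(pop, R, m)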

  17. A user guide for the EMTAC-MZ CFD code

    NASA Technical Reports Server (NTRS)

    Szema, Kuo-Yen; Chakravarthy, Sukumar R.

    1990-01-01

    The computer code (EMTAC-MZ) was applied to investigate the flow field over a variety of very complex three-dimensional (3-D) configurations across the Mach number range (subsonic, transonic, supersonic, and hypersonic flow). In the code, a finite volume, multizone implementation of high accuracy, total variation diminishing (TVD) formulation (based on Roe's scheme) is used to solve the unsteady Euler equations. In the supersonic regions of the flow, an infinitely large time step and a space-marching scheme is employed. A finite time step and a relaxation or 3-D approximate factorization method is used in subsonic flow regions. The multizone technique allows very complicated configurations to be modeled without geometry modifications, and can easily handle combined internal and external flow problems. An elliptic grid generation package is built into the EMTAC-MZ code. To generate the computational grid, only the surface geometry data are required. Results obtained for a variety of configurations, such as fighter-like configurations (F-14, AVSTOL), flow through inlet, multi-bodies (shuttle with external tank and SRBs), are reported and shown to be in good agreement with available experimental data.

  18. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
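
    A tiny Python illustration of the purine/pyrimidine binary reading described above; the particular 1/0 assignment (purine = 1, pyrimidine = 0) is an arbitrary choice made here for the example.

        PURINES = {"A", "G"}                 # the double-ring bases
        def to_binary(triplet):
            return "".join("1" if base in PURINES else "0" for base in triplet.upper())

        print(to_binary("ATG"))              # -> "101"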

  19. Development of a coded 16-ary CPFSK coherent demodulator

    NASA Technical Reports Server (NTRS)

    Clarke, Ken; Davis, Robert; Roesch, Jim

    1988-01-01

    Theory and hardware are described for a proof-of-concept 16-ary continuous phase frequency shift keying (16-CPFSK) digital modem. The 16 frequencies are spaced every 1/16th baud rate for 2 bits/sec/Hz operation. Overall rate 3/4 convolutional coding is incorporated. The demodulator differs significantly from typical quadrature phase detector approaches in that phase is coherently measured by processing the baseband output of a frequency discriminator. Baud rate phase samples from the baseband processor are decoded to yield the original data stream. The method of encoding onto the 16-ary phase nodes, together with convolutional coding gain, results in near quad PSK (QPSK) performance. The modulated signal is of constant envelope; thus the power amplifier can be saturated for peak performance. The spectrum is inherently bandlimited and requires no RF filter.

  20. E coding: a missing link for injury prevention.

    PubMed

    Halpern, J S

    1993-06-01

    E codes are a practical, detailed, and feasible method of collecting much needed information about trauma morbidity. Emergency care providers are the key to ensuring accurate information because of their ability to obtain specific information from prehospital personnel or family. Charting the information on the emergency record will simplify the task for medical records coders, researchers, epidemiologists, and public health officials. The detailed information used for E codes is not just research trivia, but rather beneficial information for all emergency providers. A specific plan of care can be developed to address the medical and social needs of each patient. This in turn may help to reduce future injuries, whether they are caused by high-risk behaviors or repetitive abuse situations. PMID:8510363

  1. A framework for control simulations using the TRANSP code

    NASA Astrophysics Data System (ADS)

    Boyer, Mark D.; Andre, Rob; Gates, David; Gerhardt, Stefan; Goumiri, Imene; Menard, Jon

    2014-10-01

    The high-performance operational goals of present-day and future tokamaks will require development of advanced feedback control algorithms. Though reduced models are often used for initial designs, it is important to study the performance of control schemes with integrated models prior to experimental implementation. To this end, a flexible framework for closed loop simulations within the TRANSP code is being developed. The framework exploits many of the predictive capabilities of TRANSP and provides a means for performing control calculations based on user-supplied data (controller matrices, target waveforms, etc.). These calculations, along with the acquisition of "real-time" measurements and manipulation of TRANSP internal variables based on actuator requests, are implemented through a hook that allows custom run-specific code to be inserted into the standard TRANSP source code. As part of the framework, a module has been created to constrain the thermal stored energy in TRANSP using a confinement scaling expression. Progress towards feedback control of the current profile on NSTX-U will be presented to demonstrate the framework. Supported in part by an appointment to the U.S. Department of Energy Fusion Energy Postdoctoral Research Program administered by the Oak Ridge Institute for Science and Education.

  2. Two-Sided Coded Aperture Imaging Without a Detector Plane

    SciTech Connect

    Ziock, Klaus-Peter; Cunningham, Mark F; Fabris, Lorenzo

    2009-01-01

    We introduce a novel design for a two-sided, coded-aperture, gamma-ray imager suitable for use in stand off detection of orphan radioactive sources. The design is an extension of an active-mask imager that would have three active planes of detector material, a central plane acting as the detector for two (active) coded-aperture mask planes, one on either side of the detector plane. In the new design the central plane is removed and the mask on the left (right) serves as the detector plane for the mask on the right (left). This design reduces the size, mass, complexity, and cost of the overall instrument. In addition, if one has fully position-sensitive detectors, then one can use the two planes as a classic Compton camera. This enhances the instrument's sensitivity at higher energies where the coded-aperture efficiency is decreased by mask penetration. A plausible design for the system is found and explored with Monte Carlo simulations.

  3. BLSTA: A boundary layer code for stability analysis

    NASA Technical Reports Server (NTRS)

    Wie, Yong-Sun

    1992-01-01

    A computer program is developed to solve the compressible, laminar boundary-layer equations for two-dimensional flow, axisymmetric flow, and quasi-three-dimensional flows including the flow along the plane of symmetry, flow along the leading-edge attachment line, and swept-wing flows with a conical flow approximation. The finite-difference numerical procedure used to solve the governing equations is second-order accurate. The flow over a wide range of speed, from subsonic to hypersonic speed with perfect gas assumption, can be calculated. Various wall boundary conditions, such as wall suction or blowing and hot or cold walls, can be applied. The results indicate that this boundary-layer code gives velocity and temperature profiles which are accurate, smooth, and continuous through the first and second normal derivatives. The code presented herein can be coupled with a stability analysis code and used to predict the onset of the boundary-layer transition which enables the assessment of the laminar flow control techniques. A user's manual is also included.

  4. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  5. The Use of a Pseudo Noise Code for DIAL Lidar

    NASA Astrophysics Data System (ADS)

    Burris, J.; Sun, X.; Abshire, J. B.

    2010-12-01

    Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future spaceborne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range-resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be >~2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (<0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate on- and off-line CO2 wavelengths (~1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
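
    A minimal Python sketch of why PN-code modulation recovers range-resolved returns: the received count sequence is a superposition of delayed replicas of the transmitted chip pattern, so circularly cross-correlating it with the code estimates the backscatter profile. The chip pattern, toy profile and background level are illustrative; this is not the instrument's actual signal processing.

        import numpy as np

        rng = np.random.default_rng(1)
        code = rng.integers(0, 2, 255)                     # stand-in for an M-sequence chip pattern
        profile = np.zeros(255)
        profile[[20, 60, 61, 62]] = [5.0, 1.0, 1.2, 0.9]   # toy range-resolved backscatter

        received = np.zeros(255)
        for lag, amp in enumerate(profile):                # superposition of delayed code replicas
            received += amp * np.roll(code, lag)
        received += rng.poisson(2.0, 255)                  # background counts

        # Circular cross-correlation with the code recovers the profile
        # (up to a scale factor and a DC offset).
        estimate = np.array([np.dot(received, np.roll(code, lag)) for lag in range(255)])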

  6. nMHDust: A 4-Fluid Partially Ionized Dusty Plasma Code

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel

    2008-11-01

    nMHDust is a next generation 4-fluid partially ionized magnetized dusty plasma code, treating the inertial dynamics of dust, ion and neutral components. Coded in ANSI C, the numerical method is based on the MHDust 3-fluid fully ionized dusty plasma code. This code expands the features of the MHDust code to include ionization/recombination effects and the netCDF data format. Tests of this code include: ionization instabilities, wave mode propagation (electromagnetic and acoustic), shear-flow instabilities, and magnetic reconnection. Relevant parameters for the space environment are considered, allowing a comparison to be made with previous dusty plasma codes (MHDust and DENISIS). The utility of the code is expanded through the possibility of a small dust mass. This allows nMHDust to be used as a 2-ion plasma code. nMHDust completes the array of fluid dusty plasma codes available for numerical investigations into nonlinear phenomena in the field of astrophysical dusty plasmas.

  7. Probability of undetected error after decoding for a concatenated coding scheme

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, however the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. Probability of undetected error is derived and bounded. A particular example, proposed for NASA telecommand system is analyzed.

  8. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  9. Code System For Calculating Reactivity Transients In a LWR.

    Energy Science and Technology Software Center (ESTSC)

    1999-03-16

    Version 00 RETRANS is appropriate to calculate power excursions in light water reactors initiated by reactivity insertions due to withdrawal of control elements. The neutron physical model is based on the time-dependent two-group neutron diffusion equations. The equation of state of the coolant is approximated by a table built into the code. RETRANS solves the heat conduction equation and calculates the heat transfer coefficient for representative fuel rods at each time-step.

  10. RESRAD: A computer code for evaluating radioactively contaminated sites

    SciTech Connect

    Yu, C.; Zielen, A.J.; Cheng, J.J.

    1993-12-31

    This document briefly describes the uses of the RESRAD computer code in calculating site-specific residual radioactive material guidelines and radiation dose-risk to an on-site individual (worker or resident) at a radioactively contaminated site. The adoption by the DOE in order 5400.5, pathway analysis methods, computer requirements, data display, the inclusion of chemical contaminants, benchmarking efforts, and supplemental information sources are all described. (GHH)

  11. A Radiation Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.
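
    The Python sketch below illustrates the final step described above: tabulated absorption coefficients are interpolated at the local state for each quadrature point in g, a (stubbed) RTE solve is done per g, and Gauss-Legendre quadrature over g in [0, 1] assembles the flux. The table values, the optically thin stand-in for the RTE solve, and all parameters are illustrative assumptions, not the NCC implementation.

        import numpy as np
        from scipy.interpolate import interp1d

        g_nodes, g_weights = np.polynomial.legendre.leggauss(8)
        g_nodes = 0.5 * (g_nodes + 1.0)                    # map nodes from [-1, 1] to [0, 1]
        g_weights = 0.5 * g_weights

        T_table = np.linspace(300.0, 2500.0, 12)           # tabulated temperatures [K]
        k_table = np.outer(np.linspace(0.1, 2.0, 12), 1.0 + g_nodes)   # toy k(T, g) table [1/m]

        def solve_rte(kappa, T_cell):
            """Placeholder 'solve': optically thin emission ~ kappa * sigma * T^4."""
            return kappa * 5.67e-8 * T_cell ** 4

        T_cell = 1400.0
        flux = 0.0
        for j, w in enumerate(g_weights):
            kappa = interp1d(T_table, k_table[:, j])(T_cell)   # table lookup at the local state
            flux += w * solve_rte(kappa, T_cell)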

  12. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  13. DANTSYS: A diffusion accelerated neutral particle transport code system

    SciTech Connect

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O`Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing.

  14. A Test on a Bilingual Dual Coding Hypothesis in Japanese-English Bilinguals.

    ERIC Educational Resources Information Center

    Taura, Hideyuki

    A study investigated the effects of second language (L2) acquisition age, length of L2 exposure, and gender on bilingual coding, and examined whether the bilingual dual coding effect in incidental recalls would be the same as in Indo-European languages. The bilingual dual coding hypothesis proposes that the individual's image system and the two…

  15. A user's manual for Electromagnetic Surface Patch (ESP) code. Version 2: Polygonal plates and wires

    NASA Astrophysics Data System (ADS)

    Newman, E. H.; Alexandropoulos, P.

    1983-09-01

    This report serves as a user's manual for the Electromagnetic Surface Patch (ESP) Code. This code is a method of moments solution for interconnections of thin wires and polygonal plates. The code can compute currents, input impedance, efficiency, mutual coupling, and far-zone radiation and scattering patterns. In addition to describing the code input and output, the use of the code is illustrated by simple examples. Subroutine descriptions are also given.

  16. Development of a New Class of Zero Cross Correlation Codes for Optical CDMA Systems

    NASA Astrophysics Data System (ADS)

    Rashidi, Che Bin Mohd; Aljunid, S. A.; Ghani, F.; Anuar, M. S.

    2012-03-01

    The paper presents a method for the development of a new class of zero cross correlation optical codes for Optical Code Division Multiple Access (OCDMA) systems using Spectral Amplitude Coding. The proposed code is called the Modified Zero Cross Correlation Code (MZCC). The code has minimum length and can be constructed quite simply for any number of users and for any code weight. The code has better spectrum slicing properties and noise performance in terms of Bit Error Rate. The Modified Zero Cross Correlation Code is demonstrated in simulation using OptiSys 6.0, where its noise performance is shown to be better than that of the existing Zero Cross Correlation Code.

  17. DMC (Distinct Motion Code): A rigid body motion code for determining the interaction of multiple spherical particles

    SciTech Connect

    Taylor, L.M.; Preece, D.S.

    1989-07-01

    The computer program DMC (Distinct Motion Code) determines the two-dimensional planar rigid body motion of an arbitrary number of spherical shaped particles. The code uses an explicit central difference time integration algorithm to calculate the motion of the particles. Contact constraints between the particles are enforced using the penalty method. Coulomb friction and viscous damping are included in the collisions. The explicit time integration is conditionally stable with a time increment size which is dependent on the mass of the smallest particle in the mesh and the penalty stiffness used for the contact forces. The code chooses the spring stiffness based on the Young's modulus and Poisson's ratio of the material. The ability to tie spheres in pairs with a constraint condition is included in the code. The code has been written in an extremely efficient manner with particular emphasis placed on vector processing. While this does not impose any restrictions on non-vector processing computers, it does provide extremely fast results on vector processing computers. A bucket sorting or boxing algorithm is used to reduce the number of comparisons which must be made between spheres to determine the contact pairs. The sorting algorithm is completely algebraic and contains no logical branching. 13 refs., 14 figs., 4 tabs.
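
    The Python sketch below shows the two basic ingredients described above: a penalty-method normal force between overlapping spheres (spring plus viscous damping) and an explicit central-difference update of a particle's velocity and position. Stiffness, damping and all numbers are illustrative, not DMC's actual parameter choices.

        import numpy as np

        def contact_force(x1, x2, r1, r2, v1, v2, k, c):
            """Penalty contact force acting on sphere 1 from sphere 2 (2-D)."""
            d = x2 - x1
            dist = np.linalg.norm(d)
            overlap = (r1 + r2) - dist
            if overlap <= 0.0:
                return np.zeros(2)                         # spheres not in contact
            n = d / dist                                   # unit normal from sphere 1 to sphere 2
            v_rel = np.dot(v2 - v1, n)                     # normal relative velocity (negative when closing)
            return -(k * overlap - c * v_rel) * n          # penalty spring + viscous damping

        def central_difference_step(x, v, f, m, dt):
            v_new = v + dt * f / m                         # update velocity from the current force
            x_new = x + dt * v_new                         # update position with the new velocity
            return x_new, v_new

        f1 = contact_force(np.zeros(2), np.array([1.8, 0.0]), 1.0, 1.0,
                           np.zeros(2), np.zeros(2), k=1.0e4, c=10.0)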

  18. DMC (Distinct Motion Code): A rigid body motion code for determining the interaction of multiple spherical particles

    NASA Astrophysics Data System (ADS)

    Taylor, L. M.; Preece, D. S.

    1989-07-01

    The computer program Distinct Motion Code (DMC) determines the two-dimensional planar rigid body motion of an arbitrary number of spherical shaped particles. The code uses an explicit central difference time integration algorithm to calculate the motion of the particles. Contact constraints between the particles are enforced using the penalty method. Coulomb friction and viscous damping are included in the collisions. The explicit time integration is conditionally stable with a time increment size which is dependent on the mass of the smallest particle in the mesh and the penalty stiffness used for the contact forces. The code chooses the spring stiffness based on the Young's modulus and Poisson's ratio of the material. The ability to tie spheres in pairs with a constraint condition is included in the code. The code has been written in an extremely efficient manner with particular emphasis placed on vector processing. While this does not impose any restrictions on non-vector processing computers, it does provide extremely fast results on vector processing computers. A bucket sorting or boxing algorithm is used to reduce the number of comparisons which must be made between spheres to determine the contact pairs. The sorting algorithm is completely algebraic and contains no logical branching.

  19. Securing optical code-division multiple-access networks with a postswitching coding scheme of signature reconfiguration

    NASA Astrophysics Data System (ADS)

    Huang, Jen-Fa; Meng, Sheng-Hui; Lin, Ying-Chen

    2014-11-01

    The optical code-division multiple-access (OCDMA) technique is considered a good candidate for providing optical layer security. An enhanced OCDMA network security mechanism that switches pseudonoise (PN) maximal-length sequence (M-sequence) signature codes to protect against eavesdropping is presented. Signature codes unique to individual OCDMA-network users are reconfigured according to the register state of the controlling electrical shift registers. Examples of signature reconfiguration following state switching of the controlling shift register for both the network user and the eavesdropper are numerically illustrated. Dynamically changing the PN state of the shift register to reconfigure the user signature sequence is shown; this hinders eavesdroppers' efforts to decode correct data sequences. The proposed scheme increases the probability of eavesdroppers committing errors in decoding and thereby substantially enhances the degree of an OCDMA network's confidentiality.
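
    As a small illustration of the register-state idea, the Python sketch below implements a Fibonacci linear feedback shift register that generates a maximal-length (M-)sequence; reseeding the register changes the emitted signature. The 7-bit taps (x^7 + x^6 + 1) and the seeds are illustrative choices, not the parameters of the scheme above.

        def m_sequence(seed, taps=(7, 6), nbits=7):
            """Fibonacci LFSR; with primitive taps the output period is 2**nbits - 1."""
            state = seed
            out = []
            for _ in range((1 << nbits) - 1):
                out.append(state & 1)                      # emit the least significant bit
                fb = 0
                for t in taps:                             # XOR of the tapped stages
                    fb ^= (state >> (t - 1)) & 1
                state = (state >> 1) | (fb << (nbits - 1))
            return out

        signature_a = m_sequence(0b1010101)                # signature for one register state
        signature_b = m_sequence(0b0011001)                # switching the state reconfigures the code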

  20. Development of a fan model for the CONTAIN code

    SciTech Connect

    Pevey, R.E.

    1987-01-08

    A fan model has been added to the CONTAIN code with a minimum of disruption of the standard CONTAIN calculation sequence. The user is required to supply a simple pressure vs. flow rate curve for each fan in his model configuration. Inclusion of the fan model required modification to two CONTAIN subroutines, IFLOW and EXEQNX. The two modified routines and the resulting executable module are located on the LANL mass storage system as /560007/iflow, /560007/exeqnx, and /560007/cont01, respectively. The model has been initially validated using a very simple sample problem and is ready for a more complete workout using the SRP reactor models from the RSRD probabilistic risk analysis.
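
    The fan model's one required input, a pressure-versus-flow curve, can be pictured with the short Python sketch below: a few tabulated points are linearly interpolated to give the pressure rise at any requested flow. The tabulated numbers and units are illustrative only; the actual CONTAIN input format may differ.

        import numpy as np

        flow_pts = np.array([0.0, 2.0, 4.0, 6.0, 8.0])     # volumetric flow rate [m^3/s]
        dp_pts = np.array([500.0, 470.0, 400.0, 280.0, 0.0])   # fan pressure rise [Pa]

        def fan_pressure_rise(q):
            """Linear interpolation of the fan curve; zero rise beyond the last point."""
            return np.interp(q, flow_pts, dp_pts, right=0.0)

        print(fan_pressure_rise(5.0))                      # -> 340.0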

  1. A LONE code for the sparse control of quantum systems

    NASA Astrophysics Data System (ADS)

    Ciaramella, G.; Borzì, A.

    2016-03-01

    In many applications with quantum spin systems, control functions with a sparse and pulse-shaped structure are often required. These controls can be obtained by solving quantum optimal control problems with L1-penalized cost functionals. In this paper, the MATLAB package LONE is presented, aimed at solving L1-penalized optimal control problems governed by unitary-operator quantum spin models. This package implements a new strategy that includes a globalized semi-smooth Krylov-Newton scheme and a continuation procedure. Results of numerical experiments demonstrate the ability of the LONE code to compute accurate sparse optimal control solutions.

  2. Sonic boom predictions using a modified Euler code

    NASA Astrophysics Data System (ADS)

    Siclari, Michael J.

    1992-04-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  3. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  4. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  5. Improving the Capabilities of a Continuum Laser Plasma Interaction Code

    SciTech Connect

    Hittinger, J F; Dorr, M R

    2006-06-15

    The numerical simulation of plasmas is a critical tool for inertial confinement fusion (ICF). We have been working to improve the predictive capability of a continuum laser plasma interaction code pF3d, which couples a continuum hydrodynamic model of an unmagnetized plasma to paraxial wave equations modeling the laser light. Advanced numerical techniques such as local mesh refinement, multigrid, and multifluid Godunov methods have been adapted and applied to nonlinear heat conduction and to multifluid plasma models. We describe these algorithms and briefly demonstrate their capabilities.

  6. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
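
    The hard-decision bit-flipping decoding mentioned above can be sketched in a few lines of Python: in each pass, any bit that participates in more unsatisfied than satisfied parity checks is flipped. The tiny parity-check matrix H below is illustrative and is not a finite-geometry LDPC code.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],                  # each row is one parity check
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])

        def bit_flip_decode(y, H, max_iters=20):
            y = y.copy()
            for _ in range(max_iters):
                syndrome = (H @ y) % 2                     # 1 marks an unsatisfied check
                if not syndrome.any():
                    break                                  # all checks satisfied
                votes = H.T @ (2 * syndrome - 1)           # unsatisfied minus satisfied checks per bit
                flip = votes > 0
                if not flip.any():
                    break
                y[flip] ^= 1
            return y

        decoded = bit_flip_decode(np.array([1, 0, 1, 1, 0, 0]), H)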

  7. Home energy ratings and energy codes -- A marriage that should work

    SciTech Connect

    Verdict, M.E.; Fairey, P.W.; DeWein, M.C.

    1998-07-01

    This paper examines how voluntary home energy rating systems (HERS) can be married to mandatory energy codes to increase code compliance while providing added benefits to consumers, builders, and code officials. Effective code enforcement and compliance is a common problem for state and local jurisdictions attempting to reduce energy consumption and increase housing affordability. Reasons frequently cited for energy code noncompliance are: (1) builder resistance to government regulations and changes in building practices; (2) the perceived complexity of the code; (3) a lack of familiarity with energy impacts among code officials and the housing industry; and (4) inadequate government resources for enforcement. By combining ratings and codes, one can create a win-win approach for code officials and energy rating organizations, the housing industry, as well as consumers who wish to reduce air pollution and energy waste. Additionally, state and local government experiences where the marriage between codes and ratings has begun are highlighted, and the barriers and benefits are assessed.

  8. A Simple Histone Code Opens Many Paths to Epigenetics

    PubMed Central

    Sneppen, Kim; Dodd, Ian B.

    2012-01-01

    Nucleosomes can be covalently modified by addition of various chemical groups on several of their exposed histone amino acids. These modifications are added and removed by enzymes (writers) and can be recognized by nucleosome-binding proteins (readers). Linking a reader domain and a writer domain that recognize and create the same modification state should allow nucleosomes in a particular modification state to recruit enzymes that create that modification state on nearby nucleosomes. This positive feedback has the potential to provide the alternative stable and heritable states required for epigenetic memory. However, analysis of simple histone codes involving interconversions between only two or three types of modified nucleosomes has revealed only a few circuit designs that allow heritable bistability. Here we show by computer simulations that a histone code involving alternative modifications at two histone positions, producing four modification states, combined with reader-writer proteins able to distinguish these states, allows for hundreds of different circuits capable of heritable bistability. These expanded possibilities result from multiple ways of generating two-step cooperativity in the positive feedback - through alternative pathways and an additional, novel cooperativity motif. Our analysis reveals other properties of such epigenetic circuits. They are most robust when the dominant nucleosome types are different at both modification positions and are not the type inserted after DNA replication. The dominant nucleosome types often recruit enzymes that create their own type or destroy the opposing type, but never catalyze their own destruction. The circuits appear to be evolutionary accessible; most circuits can be changed stepwise into almost any other circuit without losing heritable bistability. Thus, our analysis indicates that systems that utilize an expanded histone code have huge potential for generating stable and heritable nucleosome

  9. A simple histone code opens many paths to epigenetics.

    PubMed

    Sneppen, Kim; Dodd, Ian B

    2012-01-01

    Nucleosomes can be covalently modified by addition of various chemical groups on several of their exposed histone amino acids. These modifications are added and removed by enzymes (writers) and can be recognized by nucleosome-binding proteins (readers). Linking a reader domain and a writer domain that recognize and create the same modification state should allow nucleosomes in a particular modification state to recruit enzymes that create that modification state on nearby nucleosomes. This positive feedback has the potential to provide the alternative stable and heritable states required for epigenetic memory. However, analysis of simple histone codes involving interconversions between only two or three types of modified nucleosomes has revealed only a few circuit designs that allow heritable bistability. Here we show by computer simulations that a histone code involving alternative modifications at two histone positions, producing four modification states, combined with reader-writer proteins able to distinguish these states, allows for hundreds of different circuits capable of heritable bistability. These expanded possibilities result from multiple ways of generating two-step cooperativity in the positive feedback--through alternative pathways and an additional, novel cooperativity motif. Our analysis reveals other properties of such epigenetic circuits. They are most robust when the dominant nucleosome types are different at both modification positions and are not the type inserted after DNA replication. The dominant nucleosome types often recruit enzymes that create their own type or destroy the opposing type, but never catalyze their own destruction. The circuits appear to be evolutionary accessible; most circuits can be changed stepwise into almost any other circuit without losing heritable bistability. Thus, our analysis indicates that systems that utilize an expanded histone code have huge potential for generating stable and heritable nucleosome

  10. The Use of a Pseudo Noise Code for DIAL Lidar

    NASA Technical Reports Server (NTRS)

    Burris, John F.

    2010-01-01

    Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future spaceborne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range-resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be greater than approximately 2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (less than 0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate on- and off-line CO2 wavelengths (approximately 1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.

  11. NMACA Approach Used to Build a Secure Message Authentication Code

    NASA Astrophysics Data System (ADS)

    Alosaimy, Raed; Alghathbar, Khaled; Hafez, Alaaeldin M.; Eldefrawy, Mohamed H.

    Secure storage systems should consider the integrity and authentication of long-term stored information. When information is transferred through communication channels, different types of digital information can be represented, such as documents, images, and database tables. The authenticity of such information must be verified, especially when it is transferred through communication channels. Authentication verification techniques are used to verify that the information in an archive is authentic and has not been intentionally or maliciously altered. In addition to detecting malicious attacks, verifying the integrity also identifies data corruption. The purpose of a Message Authentication Code (MAC) is to authenticate messages, where MAC algorithms are keyed hash functions. In most cases, MAC techniques use iterated hash functions, and these techniques are called iterated MACs. Such techniques usually use a MAC key as an input to the compression function, and this key is involved in the compression function, f, at every stage. Modification detection codes (MDCs) are un-keyed hash functions, and are widely used by authentication techniques such as MD4, MD5, SHA-1, and RIPEMD-160. There have been new attacks on hash functions such as MD5 and SHA-1, which require the introduction of more secure hash functions. In this paper, we introduce a new MAC methodology that uses an input MAC key in the compression function to change the order of the message words and the shifting operations in the compression function. The new methodology can be used in conjunction with a wide range of modification detection code techniques. Using the SHA-1 algorithm as a model, a new (SHA-1)-MAC algorithm is presented. The (SHA-1)-MAC algorithm uses the MAC key to build the hash functions by defining the order for accessing source words and defining the number of bit positions for circular left shifts.
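
    For contrast with un-keyed MDCs, the short Python example below shows a standard keyed MAC, HMAC over SHA-1, from the standard library. It illustrates the general idea of binding a secret key into the hash computation and verifying a tag in constant time; it is not the NMACA / (SHA-1)-MAC construction proposed above, which instead uses the key to reorder message words and vary shift amounts inside the compression function.

        import hmac
        import hashlib

        key = b"shared-secret-key"
        message = b"archived document contents"

        tag = hmac.new(key, message, hashlib.sha1).hexdigest()   # keyed tag over the message

        def verify(key, message, tag):
            expected = hmac.new(key, message, hashlib.sha1).hexdigest()
            return hmac.compare_digest(expected, tag)            # constant-time comparison

        assert verify(key, message, tag)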

  12. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.

  13. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
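
    The third-order TVD (Shu-Osher) Runge-Kutta step mentioned above is short enough to sketch directly in Python; the spatial operator L(u) is a placeholder (simple upwind advection), whereas in RAM it would be the WENO or PPM discretization of the SRHD fluxes.

        import numpy as np

        def L(u):
            """Placeholder spatial operator: periodic first-order upwind advection at unit speed."""
            dx = 1.0 / u.size
            return -(u - np.roll(u, 1)) / dx

        def tvd_rk3_step(u, dt):
            u1 = u + dt * L(u)                             # first stage
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))       # second stage
            return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2)) # convex combination keeps the TVD property

        x = np.linspace(0.0, 1.0, 200)
        u = np.exp(-100.0 * (x - 0.5) ** 2)                # smooth initial profile
        u = tvd_rk3_step(u, dt=0.5 / 200)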

  14. ELLIPT2D: A Flexible Finite Element Code Written in Python

    SciTech Connect

    Pletzer, A.; Mollis, J.C.

    2001-03-22

    The use of the Python scripting language for scientific applications and in particular to solve partial differential equations is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied and so are the boundary conditions, which can be of Dirichlet, Neumann or Robbins type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include: operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
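
    The "dictionaries as sparse matrices" idea mentioned above can be illustrated with a few lines of Python: a dict keyed by (row, column) stores only the nonzero entries accumulated from, e.g., element contributions. This is a toy illustration, not ELLIPT2D's actual data structure or API.

        class DictSparseMatrix:
            def __init__(self):
                self.data = {}                             # (i, j) -> value

            def add(self, i, j, value):
                self.data[(i, j)] = self.data.get((i, j), 0.0) + value   # accumulate contributions

            def matvec(self, x):
                y = [0.0] * len(x)
                for (i, j), a in self.data.items():
                    y[i] += a * x[j]
                return y

        A = DictSparseMatrix()
        A.add(0, 0, 2.0); A.add(0, 1, -1.0)
        A.add(1, 0, -1.0); A.add(1, 1, 2.0)
        print(A.matvec([1.0, 1.0]))                        # -> [1.0, 1.0]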

  15. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    SciTech Connect

    Thuc Bui

    2007-12-06

    The program was tasked with implementing time dependent analysis of charged particles into an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions, self magnetic field and implement a more comprehensive post-processing and feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently unfeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  16. Torus mapper: a code for dynamical models of galaxies

    NASA Astrophysics Data System (ADS)

    Binney, James; McMillan, Paul J.

    2016-02-01

    We present a freely downloadable software package for modelling the dynamics of galaxies, which we call the Torus Mapper (TM). The package is based around `torus mapping', which is a non-perturbative technique for creating orbital tori for specified values of the action integrals. Given an orbital torus and a star's position at a reference time, one can compute its position at any other time, no matter how remote. One can also compute the velocities with which the star will pass through any given point and the contribution it will make to the time-averaged density there. A system of angle-action coordinates for the given potential can be created by foliating phase space with orbital tori. Such a foliation is facilitated by the ability of TM to create tori by interpolating on a grid of tori. We summarize the advantages of using TM rather than a standard time-stepper to create orbits, and give segments of code that illustrate applications of TM in several contexts, including setting up initial conditions for an N-body simulation. We examine the precision of the orbital tori created by TM and the behaviour of the code when orbits become trapped by a resonance.

  17. A computer code for performance of spur gears

    NASA Technical Reports Server (NTRS)

    Wang, K. L.; Cheng, H. S.

    1983-01-01

    In spur gears both performance and failure predictions are known to be strongly dependent on the variation of load, lubricant film thickness, and total flash or contact temperature of the contacting point as it moves along the contact path. The need for an accurate tool for predicting these variables has prompted the development of a computer code based on recent findings in EHL and on finite element methods. The analyses and some typical results illustrating the effects of gear geometry, velocity, load, lubricant viscosity, and surface convective heat transfer coefficient on the performance of spur gears are presented.

  18. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.; Herman, M.; Capote, R.

    2014-06-01

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron, charged particle and γ induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present an application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  19. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    SciTech Connect

    Palumbo, A.; Herman, M.; Capote, R.

    2014-06-15

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron, charged particle and γ induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present an application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  20. Temporal perceptual coding using a visual acuity model

    NASA Astrophysics Data System (ADS)

    Adzic, Velibor; Cohen, Robert A.; Vetro, Anthony

    2014-02-01

    This paper describes research and results in which a visual acuity (VA) model of the human visual system (HVS) is used to reduce the bitrate of coded video sequences, by eliminating the need to signal transform coefficients when their corresponding frequencies will not be detected by the HVS. The VA model is integrated into the state-of-the-art HEVC HM codec. Compared to the unmodified codec, up to 45% bitrate savings are achieved while maintaining the same subjective quality of the video sequences. Encoding times are reduced as well.

  1. Non-coding RNAs and disease: the classical ncRNAs make a comeback.

    PubMed

    de Almeida, Rogerio Alves; Fraczek, Marcin G; Parker, Steven; Delneri, Daniela; O'Keefe, Raymond T

    2016-08-15

    Many human diseases have been attributed to mutation in the protein coding regions of the human genome. The protein coding portion of the human genome, however, is very small compared with the non-coding portion of the genome. As such, there are a disproportionate number of diseases attributed to the coding compared with the non-coding portion of the genome. It is now clear that the non-coding portion of the genome produces many functional non-coding RNAs and these RNAs are slowly being linked to human diseases. Here we discuss examples where mutations in classical non-coding RNAs have been attributed to human disease and identify the future potential for the non-coding portion of the genome in disease biology. PMID:27528754

  2. A non-coherent SAC-OCDMA system using extended quadratic congruence codes for two-code keying scheme in passive optical networks

    NASA Astrophysics Data System (ADS)

    Yeh, Bih-Chyun; Lin, Chieng-Hung

    2012-12-01

    In this paper, we propose a family of extended quadratic congruence codes for two-code keying (TCK) with the corresponding encoding/decoding architecture for passive optical networks (PONs) in spectral amplitude coding optical code division multiple access (OCDMA) systems. The proposed system can simultaneously eliminate multi-user interference (MUI) and further suppress phase-induced intensity noise (PIIN). We reduce the complexity of the encoding/decoding architecture of the optical line terminal by exploiting arrayed waveguide gratings (AWGs) and the properties of the extended quadratic congruence codes (EQC codes). Moreover, we also design a deployment method to increase the number of simultaneous users. Our numerical results demonstrate that the proposed system outperforms the improved quadratic congruence codes (improved QC codes).

  3. Stacked codes: Universal fault-tolerant quantum computation in a two-dimensional layout

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Bartlett, Stephen D.

    2016-02-01

    We introduce a class of three-dimensional color codes, which we call stacked codes, together with a fault-tolerant transformation that will map logical qubits encoded in two-dimensional (2D) color codes into stacked codes and back. The stacked code allows for the transversal implementation of a non-Clifford π/8 logical gate, which when combined with the logical Clifford gates that are transversal in the 2D color code gives a gate set that is both fault-tolerant and universal without requiring nonstabilizer magic states. We then show that the layers forming the stacked code can be unfolded and arranged in a 2D layout. As only Clifford gates can be implemented transversally for 2D topological stabilizer codes, a nonlocal operation must be incorporated in order to allow for this transversal application of a non-Clifford gate. Our code achieves this operation through the transformation from a 2D color code to the unfolded stacked code induced by measuring only geometrically local stabilizers and gauge operators within the bulk of 2D color codes together with a nonlocal operator that has support on a one-dimensional boundary between such 2D codes. We believe that this proposed method to implement the nonlocal operation is a realistic one for 2D stabilizer layouts and would be beneficial in avoiding the large overheads caused by magic state distillation.

  4. Is a genome a codeword of an error-correcting code?

    PubMed

    Faria, Luzinete C B; Rocha, Andréa S L; Kleinschmidt, João H; Silva-Filho, Márcio C; Bim, Edson; Herai, Roberto H; Yamagishi, Michel E B; Palazzo, Reginaldo

    2012-01-01

    Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
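
    A minimal sketch of the kind of check involved (illustrative only; the nucleotide-to-bit labelling below is hypothetical and far simpler than the mapping used in the paper): a window of the sequence is mapped to a binary vector and tested against the parity-check matrix of the binary Hamming (7,4) code.

        import numpy as np

        # Parity-check matrix of the binary Hamming (7,4) code (columns are 1..7 in binary).
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]])

        def is_hamming_codeword(bits):
            """True if the 7-bit vector satisfies H x^T = 0 (mod 2)."""
            return not np.any(H.dot(bits) % 2)

        # Hypothetical nucleotide-to-bit labelling, for illustration only.
        label = {'A': 0, 'C': 1, 'G': 0, 'T': 1}
        seq = "ACGTACG"                              # a 7-nucleotide window
        bits = np.array([label[n] for n in seq])
        print(is_hamming_codeword(bits))             # -> True for this particular window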

  5. Is a Genome a Codeword of an Error-Correcting Code?

    PubMed Central

    Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo

    2012-01-01

    Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495

  6. The movement towards a more experimental approach to problem solving in mathematics using coding

    NASA Astrophysics Data System (ADS)

    Barichello, Leonardo

    2016-07-01

    Motivated by a problem proposed in a coding competition for secondary students, I will show in this paper how coding substantially changed the problem-solving process towards a more experimental approach.

  7. Coding of images by methods of a spline interpolation

    NASA Astrophysics Data System (ADS)

    Kozhemyako, Vladimir P.; Maidanuik, V. P.; Etokov, I. A.; Zhukov, Konstantin M.; Jorban, Saleh R.

    2000-06-01

    In image coding that involves interpolation methods, linear methods of component forming are usually used. However, taking into account the huge increase in computer speed and hardware integration density, more complicated interpolation methods, in particular spline interpolation, are of special interest. Spline interpolation is an approximation performed by a spline consisting of polynomial segments, where a cubic parabola is usually used. In this article, the image is analyzed with a 5 X 5 aperture, resulting in count rejection of the low-frequency image component: one base count per 5 X 5 fragment. The omitted source counts are restored by spline interpolation, and the counts of the high-frequency image component are then formed by subtracting the low-frequency component from the initial image and quantizing the result. At the final stage, Huffman coding is performed to remove statistical redundancy. An extensive set of experiments with various images showed that the compression factor lies in the range 10 - 70, which for the majority of test images exceeds the compression factor of JPEG standard applications at the same image quality. The investigation shows that spline approximation improves restored image quality and compression factor compared with linear interpolation. Encoding program modules have been developed for BMP-format files on the Windows and MS-DOS platforms.
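
    A minimal sketch of the downsample/spline-interpolate/residual idea described above (illustrative only, not the authors' code; the Huffman entropy-coding stage is omitted and the test image is random data):

        import numpy as np
        from scipy.ndimage import zoom

        image = np.random.rand(100, 100)             # stand-in for a test image

        low = zoom(image, 1.0 / 5.0, order=3)        # one base sample per 5x5 block
        restored = zoom(low, 5.0, order=3)           # cubic-spline reconstruction
        high = image - restored                      # high-frequency residual

        quantized = np.round(high * 16) / 16         # crude uniform quantization
        reconstruction = restored + quantized
        print("max error:", np.abs(image - reconstruction).max())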

  8. Investigation of a panel code for airframe/propeller integration analyses

    NASA Technical Reports Server (NTRS)

    Miley, S. J.

    1982-01-01

    The Hess panel code was investigated as a procedure to predict the aerodynamic loading associated with propeller slipstream interference on the airframe. The slipstream was modeled as a variable onset flow to the lifting and nonlifting bodies treated by the code. Four sets of experimental data were used for comparisons with the code. The results indicate that the Hess code, in its present form, will give valid solutions for nonuniform onset flows which vary in direction only. The code presently gives incorrect solutions for flows with variations in velocity. Modifications to the code to correct this are discussed.

  9. A model for non-monotonic intensity coding

    PubMed Central

    Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V. M.; Yarali, Ayse

    2015-01-01

    Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus–response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666

  10. A model for non-monotonic intensity coding.

    PubMed

    Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V M; Yarali, Ayse

    2015-05-01

    Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus-response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666

  11. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  12. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  13. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  14. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  15. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  16. Composing Data Parallel Code for a SPARQL Graph Engine

    SciTech Connect

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste; Haglin, David J.; Feo, John

    2013-09-08

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 Shared-memory Multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
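
    A minimal sketch of the basic-graph-pattern matching that SPARQL queries reduce to (illustrative only; the tiny dataset, variable names and query below are made up and unrelated to the tool described above):

        triples = [("alice", "knows", "bob"),
                   ("bob",   "knows", "carol"),
                   ("alice", "likes", "coffee")]

        def match(pattern, bindings, data):
            """Yield variable bindings ('?'-prefixed terms) satisfying one triple pattern."""
            for triple in data:
                new = dict(bindings)
                ok = True
                for p, t in zip(pattern, triple):
                    if p.startswith("?"):
                        if new.setdefault(p, t) != t:
                            ok = False
                            break
                    elif p != t:
                        ok = False
                        break
                if ok:
                    yield new

        def query(patterns, data):
            """Join the triple patterns left to right (a naive nested-loop join)."""
            results = [dict()]
            for pat in patterns:
                results = [b for r in results for b in match(pat, r, data)]
            return results

        # Who does someone that alice knows know?
        print(query([("alice", "knows", "?x"), ("?x", "knows", "?y")], triples))
        # -> [{'?x': 'bob', '?y': 'carol'}]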

  17. A Network Coding Based Routing Protocol for Underwater Sensor Networks

    PubMed Central

    Wu, Huayang; Chen, Min; Guan, Xin

    2012-01-01

    Due to the particularities of the underwater environment, some negative factors will seriously interfere with data transmission rates, reliability of data communication, communication range, and network throughput and energy consumption of underwater sensor networks (UWSNs). Thus, full consideration of node energy savings, while maintaining quick, correct and effective data transmission and extending the network life cycle, is essential when routing protocols for underwater sensor networks are studied. In this paper, we have proposed a novel routing algorithm for UWSNs. To increase energy consumption efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSR). We designed a probability balanced mechanism and applied it to TSR. The theory of network coding is introduced to TSBR to meet the requirement of further reducing node energy consumption and extending network lifetime. Hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed time-slot based balancing routing algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the process of routing construction, balance energy consumption of each node and effectively prolong the network lifetime. PMID:22666045

  18. FARGO3D: A New GPU-oriented MHD Code

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Pablo; Masset, Frédéric S.

    2016-03-01

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet-disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.

  19. Development and Application of a Parallel MHD code

    NASA Astrophysics Data System (ADS)

    Peterkin, Jr.

    1997-08-01

    Over the past few years, we (in collaboration with S. Colella, M. H. Frese, D. E. Lileikis and U. Shumlak) have built a general purpose, portable, scalable three-dimensional finite volume magnetohydrodynamic code, called MACH3, based on an arbitrary Lagrangian-Eulerian fluid algorithm to simulate time-dependent MHD phenomena for real materials. The physical domain of integration on which MACH3 works is decomposed into a patchwork of rectangular logical blocks that represent hexahedral physical subdomains. This block domain decomposition technique gives us a natural framework in which to implement coarse parallelization via message passing with the single program, multiple data (SPMD) model. Portability is achieved by using a parallel library that is separate from the physics code. At present, we are using the Message Passing Interface (MPI) because it is one of the industry standards, and because its Derived Data Type supports the sending and receiving of data with an arbitrary stride in memory. This feature is consistent with the manner in which boundary data is exchanged between connected block domains via ghost cells in the serial version of MACH3. In this talk, we discuss the details of the MACH3 algorithm. In addition, we present results from some simple test problems as well as from complex 3-D time-dependent simulations including magnetoplasmadynamic thrusters, fast z-pinches, and magnetic flux compression generators.
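
    A minimal sketch of the ghost-cell exchange idea described above, for a 1-D block decomposition and written with mpi4py (illustrative only; MACH3 itself is not a Python code, and the array sizes here are made up):

        # Run with, e.g.:  mpiexec -n 4 python ghost_exchange.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 10                                        # interior cells per block
        u = np.full(n + 2, float(rank))               # one ghost cell on each side

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # Pass the rightmost interior cell to the right neighbour while filling our
        # left ghost cell from the left neighbour; then do the mirror-image exchange.
        comm.Sendrecv(sendbuf=u[n:n + 1], dest=right, sendtag=0,
                      recvbuf=u[0:1], source=left, recvtag=0)
        comm.Sendrecv(sendbuf=u[1:2], dest=left, sendtag=1,
                      recvbuf=u[n + 1:n + 2], source=right, recvtag=1)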

  20. A memristive spiking neuron with firing rate coding

    PubMed Central

    Ignatov, Marina; Ziegler, Martin; Hansen, Mirko; Petraru, Adrian; Kohlstedt, Hermann

    2015-01-01

    Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulations of neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2−x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926. PMID:26539074
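
    A minimal sketch of firing-rate coding with spike-frequency adaptation, using a simple leaky integrate-and-fire model (illustrative only; parameter values are arbitrary and this is not a model of the memristive circuit itself):

        def firing_rate(i_ext, t_end=1.0, dt=1e-4, tau=0.02, v_th=1.0, tau_a=0.2, b=0.3):
            """Firing rate (Hz) of a leaky integrate-and-fire neuron with adaptation."""
            v, a, spikes = 0.0, 0.0, 0
            for _ in range(int(t_end / dt)):
                v += dt * (-v + i_ext - a) / tau      # membrane potential dynamics
                a += dt * (-a) / tau_a                # adaptation current decays
                if v >= v_th:                         # spike: reset and adapt
                    spikes += 1
                    v = 0.0
                    a += b
            return spikes / t_end

        # Stronger stimuli give higher, but adaptation-limited, firing rates.
        for i_ext in (1.2, 1.6, 2.0):
            print(i_ext, firing_rate(i_ext), "Hz")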

  1. Performance of a parallel thermal-hydraulics code TEMPEST

    SciTech Connect

    Fann, G.I.; Trent, D.S.

    1996-11-01

    The authors describe the parallelization of the TEMPEST thermal-hydraulics code. The serial version of this code is used for production quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative pressure-based and mass-conserved algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST. BiCGStab also solves the coupled thermal energy equation. Scaling performance for 3 problem sizes (221220 nodes, 358120 nodes, and 701220 nodes) is presented. These problems were run on 2 different parallel machines: IBM-SP and SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128 processor IBM-SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
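
    A minimal sketch of a diagonally (Jacobi) preconditioned BiCGStab solve, here using SciPy on a 1-D Laplacian as a stand-in for the pressure system (illustrative only, not TEMPEST code):

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, bicgstab

        n = 1000
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        inv_diag = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)  # diagonal preconditioner

        x, info = bicgstab(A, b, M=M)
        print("converged" if info == 0 else "info = %d" % info,
              "residual =", np.linalg.norm(A @ x - b))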

  2. System for loading executable code into volatile memory in a downhole tool

    DOEpatents

    Hall, David R.; Bartholomew, David B.; Johnson, Monte L.

    2007-09-25

    A system for loading an executable code into volatile memory in a downhole tool string component comprises a surface control unit comprising executable code. An integrated downhole network comprises data transmission elements in communication with the surface control unit and the volatile memory. The executable code, stored in the surface control unit, is not permanently stored in the downhole tool string component. In a preferred embodiment of the present invention, the downhole tool string component comprises boot memory. In another embodiment, the executable code is an operating system executable code. Preferably, the volatile memory comprises random access memory (RAM). A method for loading executable code to volatile memory in a downhole tool string component comprises sending the code from the surface control unit to a processor in the downhole tool string component over the network. A central processing unit writes the executable code in the volatile memory.

  3. FRINK - A Code to Evaluate Space Reactor Transients

    SciTech Connect

    Poston, David I.; Marcille, Thomas F.; Dixon, David D.; Amiri, Benjamin W.

    2007-01-30

    One of the biggest needs for space reactor design and development is detailed system modeling. Most proposed space fission systems are very different from previously operated fission power systems, and extensive testing and modeling will be required to demonstrate integrated system performance. There are also some aspects of space reactors that make them unique from most terrestrial applications, and require different modeling approaches. The Fission Reactor Integrated Nuclear Kinetics (FRINK) code was developed to evaluate simplified space reactor transients (note: the term "space reactor" inherently includes planetary and lunar surface reactors). FRINK is an integrated point kinetic/thermal-hydraulic transient analysis FORTRAN code - "integrated" refers to the simultaneous solution of the thermal and neutronic equations. In its current state FRINK is a very simple system model, perhaps better referred to as a reactor model. The "system" only extends to the primary loop power removal boundary condition; however this allows the simulation of simplified transients (e.g. loss of primary heat sink, loss of flow, large reactivity insertion, etc.), which are most important in bounding early system conceptual design. FRINK could then be added to a complete system model later in the design and development process as system design matures.
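
    A minimal sketch of the point-kinetics piece of such a model, with a single delayed-neutron group and a constant reactivity step (illustrative only; parameter values are arbitrary and the thermal feedback that FRINK solves simultaneously is omitted):

        from scipy.integrate import solve_ivp

        beta, lam, Lam = 0.0065, 0.08, 1.0e-5     # delayed fraction, decay const, generation time
        rho = 0.2 * beta                          # small positive reactivity insertion

        def point_kinetics(t, y):
            n, c = y                              # neutron population, precursor concentration
            dn = (rho - beta) / Lam * n + lam * c
            dc = beta / Lam * n - lam * c
            return [dn, dc]

        y0 = [1.0, beta / (lam * Lam)]            # steady state for rho = 0
        sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, method="LSODA", rtol=1e-8)
        print("relative power after 10 s:", sol.y[0, -1])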

  4. CANTATAdb: A Collection of Plant Long Non-Coding RNAs

    PubMed Central

    Szcześniak, Michał W.; Rosikiewicz, Wojciech; Makałowska, Izabela

    2016-01-01

    Long non-coding RNAs (lncRNAs) represent a class of potent regulators of gene expression that are found in a wide array of eukaryotes; however, our knowledge about these molecules in plants is still very limited. In particular, a number of model plant species still lack comprehensive data sets of lncRNAs and their annotations, and very little is known about their biological roles. To meet these shortcomings, we created an online database of lncRNAs in 10 model plant species. The lncRNAs were identified computationally using dozens of publicly available RNA sequencing (RNA-Seq) libraries. Expression values, coding potential, sequence alignments as well as other types of data provide annotation for the identified lncRNAs. In order to better characterize them, we investigated their potential roles in splicing modulation and deregulation of microRNA functions. The data are freely available for searching, browsing and downloading from an online database called CANTATAdb (http://cantata.amu.edu.pl, http://yeti.amu.edu.pl/CANTATA/). PMID:26657895

  5. HD Photo: a new image coding technology for digital photography

    NASA Astrophysics Data System (ADS)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  6. FRINK - A Code to Evaluate Space Reactor Transients

    NASA Astrophysics Data System (ADS)

    Poston, David I.; Dixon, David D.; Marcille, Thomas F.; Amiri, Benjamin W.

    2007-01-01

    One of the biggest needs for space reactor design and development is detailed system modeling. Most proposed space fission systems are very different from previously operated fission power systems, and extensive testing and modeling will be required to demonstrate integrated system performance. There are also some aspects of space reactors that make them unique from most terrestrial applications, and require different modeling approaches. The Fission Reactor Integrated Nuclear Kinetics (FRINK) code was developed to evaluate simplified space reactor transients (note: the term "space reactor" inherently includes planetary and lunar surface reactors). FRINK is an integrated point kinetic/thermal-hydraulic transient analysis FORTRAN code - "integrated" refers to the simultaneous solution of the thermal and neutronic equations. In its current state FRINK is a very simple system model, perhaps better referred to as a reactor model. The "system" only extends to the primary loop power removal boundary condition; however this allows the simulation of simplified transients (e.g. loss of primary heat sink, loss of flow, large reactivity insertion, etc.), which are most important in bounding early system conceptual design. FRINK could then be added to a complete system model later in the design and development process as system design matures.

  7. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  8. An upper bound for codes in a two-access binary erasure channel

    NASA Technical Reports Server (NTRS)

    Van Tilborg, H. C. A.

    1978-01-01

    A method for determining an upper bound for the size of a code for a two-access binary erasure channel is presented. For uniquely decodable codes, this bound gives a combinatorial proof of a result by Liao. Examples of the bound are given for codes with minimum distance 4.

  9. A co-designed equalization, modulation, and coding scheme

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.

    1992-01-01

    The commercial impact and technical success of Trellis Coded Modulation seem to illustrate that, if Shannon's capacity is going to be neared, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise and has tried to combine the coding/modulation with adaptive equalization. The motive is to gain similar advances on less perfect or idealized channels.

  10. JAE: A Jupiter Atmospheric Entry Probe Heating Code

    NASA Technical Reports Server (NTRS)

    Wercinski, Paul F.; Tauber, Michael E.; Yang, Lily

    1997-01-01

    The strong gravitational attraction of Jupiter on probes approaching the planet results in very high atmospheric entry velocities. The values relative to the rotating atmosphere can vary from about 47 to 60 km/sec, depending on the latitude of the entry. Therefore, the peak heating rates and heat shield mass fractions exceed those for any other atmospheric entries. For example, the Galileo probe's heat shield mass fraction was 50%, of which 45% was devoted to the forebody. Although the Galileo probe's mission was very successful, many more scientific questions about the Jovian atmosphere remain to be answered and additional probe missions are being planned. Recent developments in microelectronics have raised the possibility of building smaller and less expensive probes than Galileo. Therefore, it was desirable to develop a code that could quickly compute the forebody entry heating environments when performing parametric probe sizing studies. The Jupiter Atmospheric Entry (JAE) code was developed to meet this requirement. The body geometry consists of a blunt-nosed conical shape of arbitrary nose and base radius and cone angles up to about 65 deg at zero angle of attack.

  11. Chemotopic Odorant Coding in a Mammalian Olfactory System

    PubMed Central

    Johnson, Brett A.; Leon, Michael

    2008-01-01

    Systematic mapping studies involving 365 odorant chemicals have shown that glomerular responses in the rat olfactory bulb are organized spatially in patterns that are related to the chemistry of the odorant stimuli. This organization involves the spatial clustering of principal responses to numerous odorants that share key aspects of chemistry such as functional groups, hydrocarbon structural elements, and/or overall molecular properties related to water solubility. In several of the clusters, responses shift progressively in position according to odorant carbon chain length. These response domains appear to be constructed from orderly projections of sensory neurons in the olfactory epithelium and may also involve chromatography across the nasal mucosa. The spatial clustering of glomerular responses may serve to “tune” the principal responses of bulbar projection neurons by way of inhibitory interneuronal networks, allowing the projection neurons to respond to a narrower range of stimuli than their associated sensory neurons. When glomerular activity patterns are viewed relative to the overall level of glomerular activation, the patterns accurately predict the perception of odor quality, thereby supporting the notion that spatial patterns of activity are the key factors underlying that aspect of the olfactory code. A critical analysis suggests that alternative coding mechanisms for odor quality, such as those based on temporal patterns of responses, enjoy little experimental support. PMID:17480025

  12. A symbiotic liaison between the genetic and epigenetic code.

    PubMed

    Heyn, Holger

    2014-01-01

    With rapid advances in sequencing technologies, we are undergoing a paradigm shift from hypothesis- to data-driven research. Genome-wide profiling efforts have given informative insights into biological processes; however, considering the wealth of variation, the major challenge still remains in their meaningful interpretation. In particular, sequence variation in non-coding contexts is often challenging to interpret. Here, data integration approaches for the identification of functional genetic variability represent a possible solution. For example, functional linkage analysis integrating genotype and expression data has determined regulatory quantitative trait loci and proposed causal relationships. In addition to gene expression, epigenetic regulation and specifically DNA methylation was established as a highly valuable surrogate mark for functional variance of the genetic code. Epigenetic modification has served as a powerful mediator trait to elucidate mechanisms forming phenotypes in health and disease. Particularly, integrative studies of genetic and DNA methylation data have been able to guide interpretation strategies of risk genotypes, but also proved their value for physiological traits, such as natural human variation and aging. This Review seeks to illustrate the power of data integration in the genomic era exemplified by DNA methylation quantitative trait loci. However, the model is further extendable to virtually all traceable molecular traits. PMID:24822056

  13. Equilibrium and stability code for a diffuse plasma

    PubMed Central

    Betancourt, Octavio; Garabedian, Paul

    1976-01-01

    A computer code to investigate the equilibrium and stability of a diffuse plasma in three dimensions is described that generalizes earlier work on a sharp free boundary model. Toroidal equilibria of a plasma are determined by considering paths of steepest descent associated with a new version of the variational principle of magnetohydrodynamics that involves mapping a fixed coordinate domain onto the plasma. A discrete approximation of the potential energy is written down following the finite element method, and the resulting expression is minimized with respect to the values of the mapping at points of a rectangular grid. If a relative minimum of the discrete analogue of the energy is attained, the corresponding equilibrium is considered to be stable. PMID:16592310

  14. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  15. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 11 2011-10-01 2011-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  16. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  17. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  18. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of... should be adhered to by all Government employees, including office-holders: code of ethics for...

  19. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  20. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service 1 2012-07-01 2012-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  1. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service 1 2013-07-01 2013-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  2. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (using the Hamming metric) is derived from the Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
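
    For reference, the binary Krawtchouk polynomials mentioned above are usually defined by K_k(x; n) = sum_j (-1)^j C(x, j) C(n-x, k-j); the short sketch below (illustrative only) evaluates them and spot-checks their orthogonality with respect to the binomial weight.

        from math import comb

        def krawtchouk(k, x, n):
            """Binary Krawtchouk polynomial K_k(x; n) = sum_j (-1)^j C(x, j) C(n-x, k-j)."""
            return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

        # Orthogonality: sum_x C(n, x) K_k(x) K_l(x) = 2^n C(n, k) if k == l, else 0.
        n = 7
        print(sum(comb(n, x) * krawtchouk(2, x, n) * krawtchouk(3, x, n) for x in range(n + 1)))
        print(sum(comb(n, x) * krawtchouk(2, x, n) ** 2 for x in range(n + 1)), 2 ** n * comb(n, 2))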

  3. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Crab Delivery Condition Codes 3a Table 3a to Part 680 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code...

  4. Computer Code

    NASA Technical Reports Server (NTRS)

    1985-01-01

    COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods which can be compared for accuracy. MINIVER saved Marquardt three months in computer time and $15,000.

  5. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  6. Reasoning with Computer Code: a new Mathematical Logic

    NASA Astrophysics Data System (ADS)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  7. Implementation of Hadamard spectroscopy using MOEMS as a coded aperture

    NASA Astrophysics Data System (ADS)

    Vasile, T.; Damian, V.; Coltuc, D.; Garoi, F.; Udrea, C.

    2015-02-01

    Although nowadays spectrometers have reached a high level of performance, output signals are often weak, and traditional slit spectrometers still confront the problem of poor optical throughput, which minimizes their efficiency in low-light setup conditions. In order to overcome these issues, Hadamard spectroscopy (HS) was implemented in a conventional Ebert-Fastie type of spectrometer setup, by substituting the exit slit with a digital micro-mirror device (DMD) which acts as a coded aperture. The theory behind HS and the functionality of the DMD are presented. The improvements brought by HS are illustrated by means of a spectrometric experiment, and a higher-SNR spectrum is acquired. Comparative experiments were conducted in order to emphasize the SNR differences between HS and the scanning slit method. Results provide an SNR gain of 3.35 favoring HS. One can conclude that the HS method is a great asset for low-light spectrometric experiments.
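
    A minimal sketch of the Hadamard-multiplexing idea behind such a coded aperture (illustrative only, not the authors' code): the rows of an S-matrix built from a Sylvester Hadamard matrix serve as 0/1 mask patterns, and the measured totals are decoded with the closed-form inverse of the S-matrix.

        import numpy as np
        from scipy.linalg import hadamard

        n = 8                                       # Hadamard order (a power of two)
        H = hadamard(n)
        S = (1 - H[1:, 1:]) // 2                    # (n-1) x (n-1) matrix of 0/1 mask rows

        m = n - 1
        spectrum = np.random.rand(m)                # stand-in for the true spectrum
        measurements = S @ spectrum                 # one detector reading per DMD mask

        # Closed-form inverse of an S-matrix: S^-1 = 2/(m+1) * (2 S^T - J).
        decoded = (2.0 / (m + 1)) * (2 * S.T - np.ones((m, m))) @ measurements
        print(np.allclose(decoded, spectrum))       # -> True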

  8. Development of a Monte-Carlo Radiative Transfer Code for the Juno/JIRAM Limb Measurements

    NASA Astrophysics Data System (ADS)

    Sindoni, G.; Adriani, A.; Mayorov, B.; Aoki, S.; Grassi, D.; Moriconi, M.; Oliva, F.

    2013-09-01

    The Juno/JIRAM instrument will acquire limb spectra of the Jupiter atmosphere in the infrared spectral range. The analysis of these spectra requires a radiative transfer code that takes into account the multiple scattering by particles in a spherical-shell atmosphere. Therefore, we are developing a code based on the Monte-Carlo approach to simulate the JIRAM observations. The validation of the code was performed by comparison with DISORT-based codes.

  9. A bandwidth and power-efficient coded modulation system for commercial satellite applications

    NASA Astrophysics Data System (ADS)

    Hemmati, F.; Miller, S.

    1992-03-01

    Coded modulation techniques for development of a B-ISDN-compatible modem/codec are investigated. The selected baseband processor system must support transmission of 155.52 Mbit/s of data over an Intelsat 72-MHz transponder. Performance objectives and fundamental system parameters, including channel symbol rate, code rate, and the modulation scheme, are determined. From several candidate codes, a concatenated coding system, consisting of a coded octal phase shift keying modulation as the inner code and a high-rate Reed-Solomon as the outer code, is selected, and its bit error rate performance is analyzed by computer simulation. The hardware implementation of the decoder for the selected code is also described.

  10. Part Six: Should Adult and Continuing Education Develop a Code of Ethics?

    ERIC Educational Resources Information Center

    Cunningham, Phyllis M.; And Others

    1992-01-01

    Cunningham views codes of ethics as inappropriate because they help those in or working toward positions of power and inhibit change. Sork and Welock identify benefits of developing a code and consequences of not having one. (SK)

  11. A novel method for performance improvement of optical CDMA system using alterable concatenated code

    NASA Astrophysics Data System (ADS)

    Qiu, Kun; Zhang, Chongfu

    2007-04-01

    A novel method using an alterable concatenated code to pre-encode is proposed to reduce the impact of system impairment and multiple access interference (MAI) in an optical code division multiple access (OCDMA) system, and comprehensive comparisons between different concatenated code types and forward error correcting (FEC) schemes are studied by simulation. In the scheme, we apply concatenated coding to the embedded modulation scheme, and optical orthogonal code (OOC) is employed as the address sequence code; an avalanche photodiode (APD) is selected as the system receiver. The bit error rate (BER) performance is derived taking into account the effect of some noises, dispersion power penalty and the MAI. From both theoretical analysis and numerical results, we can show that the proposed system has good performance at a BER of 10-9, with a gain of 6.4 dB achieved using the concatenated code as the pre-code, and this scheme permits implementation of a cost effective OCDMA system.

  12. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8)) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  13. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and is more suitable for optical transmission systems.
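
    The Chinese remainder theorem invoked by this construction states that, for pairwise coprime moduli m_i and residues a_i, there is a unique x modulo the product of the m_i with x ≡ a_i (mod m_i). A minimal Python sketch of the theorem itself (not of the paper's QC-LDPC construction) is:

        # Chinese remainder theorem: combine residues over pairwise coprime moduli.
        from math import prod

        def crt(residues, moduli):
            M = prod(moduli)
            x = 0
            for a, m in zip(residues, moduli):
                Mi = M // m
                x += a * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
            return x % M

        print(crt([2, 3, 2], [3, 5, 7]))   # 23, since 23 % 3 == 2, 23 % 5 == 3, 23 % 7 == 2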

  14. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Astrophysics Data System (ADS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-02-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  15. User's guide for a flat wake rotor inflow/wake velocity prediction code, DOWN

    NASA Technical Reports Server (NTRS)

    Wilson, John C.

    1991-01-01

    A computer code named DOWN was created to implement a flat wake theory for the calculation of rotor inflow and wake velocities. A brief description of the code methodology and instructions for its use are given. The code will be available from NASA's Computer Software Management and Information Center (COSMIC).

  16. Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code

    SciTech Connect

    Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.

    1985-01-01

    In an effort to increase spatial resolution without adding additional meshes, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time also tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.

  17. Adaptation of a neutron diffraction detector to coded aperture imaging

    SciTech Connect

    Vanier, P.E.; Forman, L.

    1997-02-01

    A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from whence they came. This realization of an idea which defied the conventional wisdom has provided a device which has never before been available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.

  18. DYNAVAC: a transient-vacuum-network analysis code

    SciTech Connect

    Deis, G.A.

    1980-07-08

    This report discusses the structure and use of the program DYNAVAC, a new transient-vacuum-network analysis code implemented on the NMFECC CDC-7600 computer. DYNAVAC solves for the transient pressures in a network of up to twenty lumped volumes, interconnected in any configuration by specified conductances. Each volume can have an internal gas source, a pumping speed, and any initial pressure. The gas-source rates can vary with time in any piecewise-linear manner, and up to twenty different time variations can be included in a single problem. In addition, the pumping speed in each volume can vary with the total gas pumped in the volume, thus simulating the saturation of surface pumping. This report is intended to be both a general description and a user's manual for DYNAVAC.
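
    The lumped-volume balance such a code integrates can be written, for each volume i, as V_i dP_i/dt = Q_i - S_i P_i + sum_j C_ij (P_j - P_i). A minimal explicit-Euler sketch in Python for a hypothetical two-volume network (a generic illustration with made-up numbers, not the DYNAVAC algorithm) is:

        # Transient pressures in a two-volume vacuum network (hypothetical parameters).
        import numpy as np

        V = np.array([100.0, 50.0])          # volumes (L)
        Q = np.array([1.0e-3, 0.0])          # gas loads (Torr*L/s)
        S = np.array([0.0, 500.0])           # pump speeds (L/s)
        C = np.array([[0.0, 20.0],
                      [20.0, 0.0]])          # interconnecting conductances (L/s)
        P = np.array([1.0e-6, 1.0e-6])       # initial pressures (Torr)

        dt = 1.0e-3
        for _ in range(200000):              # integrate 200 s of the transient
            net_flow = C @ P - C.sum(axis=1) * P   # sum_j C_ij * (P_j - P_i)
            P = P + dt * (Q - S * P + net_flow) / V

        print(P)                             # pressures approach their steady-state values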

  19. A code mapping scheme for dataflow software pipelining

    SciTech Connect

    Gao, G.R. )

    1991-01-01

    The rapid advances in computer architecture and VLSI device technology make it possible to build massively parallel computers integrating the functions of hundreds or thousands of hardware units. However, the success of such massively parallel systems must be based on a sound model of parallel computation - from its programming model down to its architecture. To this end, the dataflow model of computation offers a sound, simple, yet powerful model of parallel computing. This book presents a pipelined code mapping scheme for array operations on static dataflow architectures known as dataflow software pipelining. Algorithmic balancing techniques are developed to transform dataflow programs into fully pipelined dataflow graphs. A compiling scheme, formulated to map array operations in a pipelined fashion, and the optimization of array operations are also presented. The mapping technique uses both global and local optimization, unified by the pipeline principle.

  20. The Penal Code (Amendment) Act 1989 (Act A727), 1989.

    PubMed

    1989-01-01

    In 1989, Malaysia amended its penal code to provide that inducing an abortion is not an offense if the procedure is performed by a registered medical practitioner who has determined that continuation of the pregnancy would risk the life of the woman or damage her mental or physical health. Additional amendments include a legal description of the conditions which constitute the act of rape. Among these conditions is intercourse with or without consent with a woman under the age of 16. Malaysia fails to recognize rape within a marriage unless the woman is protected from her husband by judicial decree or is living separately from her husband according to Muslim custom. Rape is punishable by imprisonment for a term of 5-20 years and by whipping. PMID:12344384

  1. A technique for importing an arbitrary distribution of mass and magnetic field from an MHD code into a PIC code.

    NASA Astrophysics Data System (ADS)

    Swanekamp, S. B.; Oliver, B. V.; Grossmann, J. M.; Smithe, D.; Ludeking, L.

    1996-11-01

    The current understanding of plasma opening switch (POS) operation is as follows. During the conduction phase the switch plasma is redistributed by MHD forces. This redistribution of mass leads to the formation of a low density region in the switch where a 1-3 mm gap in the plasma is believed to form as the switch opens and magnetic energy is transferred between the primary storage inductor and the load. The processes of gap formation and power delivery are not very well understood. It is generally accepted that the assumptions of MHD theory are not valid during the gap formation and power delivery processes because electron inertia and the lack of space-charge neutrality are expected to play a key role. To study non-MHD processes during the gap formation process and power delivery phase of the POS, we have developed a technique for importing an arbitrary state of an MHD code into the PIC code MAGIC. At present the plasma kinetic pressure is ignored during the initialization of particles. Work supported by Defense Nuclear Agency. (Author affiliations: JAYCOR, Vienna, VA 22102; NRL-NRC Research Associate.)

  2. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary
    Program title: FLY 3.1
    Catalogue identifier: ADSC_v2_0
    Licensing provisions: yes
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    No. of lines in distributed program, including test data, etc.: 158 172
    No. of bytes in distributed program, including test data, etc.: 4 719 953
    Distribution format: tar.gz
    Programming language: Fortran 90, C
    Computer: Beowulf cluster, PC, MPP systems
    Operating system: Linux, Aix
    RAM: 100M words
    Catalogue identifier of previous version: ADSC_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159
    Does the new version supersede the previous version?: yes
    Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force
    Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986)
    Reasons for the new version: The new version of FLY is implemented by using the MPI-2 standard: the distributed version 3.1 was developed by using the MPICH2 library on a PC Linux cluster. Today the FLY performance allows us to consider the FLY code among the most powerful parallel codes for tree N-body simulations. Another important new feature regards the availability of an interface with hydrodynamical Paramesh based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. The idea to build an interface between two codes, that have different and complementary cosmological tasks, allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block

  3. Regulations and Ethical Considerations for Astronomy Education Research III: A Suggested Code of Ethics

    NASA Astrophysics Data System (ADS)

    Brogt, Erik; Foster, Tom; Dokter, Erin; Buxner, Sanlyn; Antonellis, Jessie

    We present an argument for, and suggested implementation of, a code of ethics for the astronomy education research community. This code of ethics is based on legal and ethical considerations set forth by U.S. federal regulations and the existing code of conduct of the American Educational Research Association. We also provide a fictitious research study as an example for working through the suggested code of ethics.

  4. A High-Rate Space-Time Block Code with Full Diversity

    NASA Astrophysics Data System (ADS)

    Gao, Zhenzhen; Zhu, Shihua; Zhong, Zhimeng

    A new high-rate space-time block code (STBC) with full transmit diversity gain for four transmit antennas based on a generalized Alamouti code structure is proposed. The proposed code has lower Maximum Likelihood (ML) decoding complexity than the Double ABBA scheme does. Constellation rotation is used to maximize the diversity product. With the optimal rotated constellations, the proposed code significantly outperforms some known high-rate STBCs in the literature with similar complexity and the same spectral efficiency.
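
    The generalized Alamouti structure referred to above builds on the classic two-antenna Alamouti block; a brief numpy sketch of that building block (not the paper's four-antenna code) shows the orthogonal encoding and the linear combining that recovers the symbols:

        # Classic 2x2 Alamouti space-time block code (illustrative building block only).
        import numpy as np

        def alamouti_encode(s1, s2):
            # Rows are time slots, columns are transmit antennas.
            return np.array([[s1, s2],
                             [-np.conj(s2), np.conj(s1)]])

        def alamouti_combine(r1, r2, h1, h2):
            # Linear combining for one receive antenna with flat-fading gains h1, h2.
            s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
            s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
            return s1_hat, s2_hat        # equal to (|h1|^2 + |h2|^2) times the sent symbols

        h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j                                 # hypothetical channel gains
        r1, r2 = alamouti_encode(1 + 1j, 1 - 1j) @ np.array([h1, h2])    # noiseless reception
        print(alamouti_combine(r1, r2, h1, h2))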

  5. A User's Guide to the PLTEMP/ANL Code

    SciTech Connect

    Olson, Arne P.; Kalimullah, M.

    2015-07-07

    PLTEMP/ANL V4.2 is a FORTRAN program that obtains a steady-state flow and temperature solution for a nuclear reactor core, or for a single fuel assembly. It is based on an evolutionary sequence of ''PLTEMP'' codes in use at ANL for the past 20 years. Fueled and non-fueled regions are modeled. Each fuel assembly consists of one or more plates or tubes separated by coolant channels. The fuel plates may have one to five layers of different materials, each with heat generation. The width of a fuel plate may be divided into multiple longitudinal stripes, each with its own axial power shape. The temperature solution is effectively 2-dimensional. It begins with a one-dimensional solution across all coolant channels and fuel plates/tubes within a given fuel assembly, at the entrance to the assembly. The temperature solution is repeated for each axial node along the length of the fuel assembly. The geometry may be either slab or radial, corresponding to fuel assemblies made of a series of flat (or slightly curved) plates, or of nested tubes. A variety of thermal-hydraulic correlations are available with which to determine safety margins such as Onset-of- Nucleate boiling (ONB), departure from nucleate boiling (DNB), and onset of flow instability (FI). Coolant properties for either light or heavy water are obtained from FORTRAN functions rather than from tables. The code is intended for thermal-hydraulic analysis of research reactor performance in the sub-cooled boiling regime. Both turbulent and laminar flow regimes can be modeled. Options to calculate both forced flow and natural circulation are available. A general search capability is available (Appendix XII) to greatly reduce the reactor analyst’s time.

  6. A user's guide to the PLTEMP/ANL code.

    SciTech Connect

    Kalimullah, M.

    2011-07-05

    PLTEMP/ANL V4.1 is a FORTRAN program that obtains a steady-state flow and temperature solution for a nuclear reactor core, or for a single fuel assembly. It is based on an evolutionary sequence of ''PLTEMP'' codes in use at ANL for the past 20 years. Fueled and non-fueled regions are modeled. Each fuel assembly consists of one or more plates or tubes separated by coolant channels. The fuel plates may have one to five layers of different materials, each with heat generation. The width of a fuel plate may be divided into multiple longitudinal stripes, each with its own axial power shape. The temperature solution is effectively 2-dimensional. It begins with a one-dimensional solution across all coolant channels and fuel plates/tubes within a given fuel assembly, at the entrance to the assembly. The temperature solution is repeated for each axial node along the length of the fuel assembly. The geometry may be either slab or radial, corresponding to fuel assemblies made of a series of flat (or slightly curved) plates, or of nested tubes. A variety of thermal-hydraulic correlations are available with which to determine safety margins such as Onset-of-Nucleate boiling (ONB), departure from nucleate boiling (DNB), and onset of flow instability (FI). Coolant properties for either light or heavy water are obtained from FORTRAN functions rather than from tables. The code is intended for thermal-hydraulic analysis of research reactor performance in the sub-cooled boiling regime. Both turbulent and laminar flow regimes can be modeled. Options to calculate both forced flow and natural circulation are available. A general search capability is available (Appendix XII) to greatly reduce the reactor analyst's time.

  7. DNA codes

    SciTech Connect

    Torney, D. C.

    2001-01-01

    We have begun to characterize a variety of codes, motivated by potential implementation as (quaternary) DNA n-sequences, with letters denoted A, C, G, T. The first codes we studied are the most reminiscent of conventional group codes. For these codes, Hamming similarity was generalized so that the score for matched letters takes more than one value, depending upon which letters are matched [2]. These codes consist of n-sequences satisfying an upper bound on the similarities, summed over the letter positions, of distinct codewords. We chose similarity 2 for matches of letters A and T and 3 for matches of the letters C and G, providing a rough approximation to double-strand bond energies in DNA. An inherent novelty of DNA codes is 'reverse complementation'. The latter may be defined, as follows, not only for alphabets of size four, but, more generally, for any even-size alphabet. All that is required is a matching of the letters of the alphabet: a partition into pairs. Then, the reverse complement of a codeword is obtained by reversing the order of its letters and replacing each letter by its match. For DNA, the matching is AT/CG because these are the Watson-Crick bonding pairs. Reversal arises because two DNA sequences form a double strand with opposite relative orientations. Thus, as will be described in detail, because in vitro decoding involves the formation of double-stranded DNA from two codewords, it is reasonable to assume - for universal applicability - that the reverse complement of any codeword is also a codeword. In particular, self-reverse complementary codewords are expressly forbidden in reverse-complement codes. Thus, an appropriate distance between all pairs of codewords must, when large, effectively prohibit binding between the respective codewords: to form a double strand. Only reverse-complement pairs of codewords should be able to bind. For most applications, a DNA code is to be bi-partitioned, such that the reverse-complementary pairs are separated
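
    The reverse-complement operation central to these codes is straightforward to state in code; a minimal Python sketch, using a few hypothetical candidate words, flags self-reverse-complementary words and reports reverse-complement pairs:

        # Reverse complementation under the Watson-Crick matching AT/CG (hypothetical candidate words).
        COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

        def reverse_complement(word):
            return "".join(COMPLEMENT[ch] for ch in reversed(word))

        candidates = ["ACGT", "AACG", "CGTT", "GGAT"]
        for w in candidates:
            rc = reverse_complement(w)
            tag = "self-reverse-complementary (forbidden)" if w == rc else f"reverse complement is {rc}"
            print(w, "->", tag)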

  8. Algorithms for a very high speed universal noiseless coding module

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Yeh, Pen-Shu

    1991-01-01

    The algorithmic definitions and performance characterizations are presented for a high performance adaptive coding module. Operation of at least one of these (single chip) implementations is expected to exceed 500 Mbits/s under laboratory conditions. A companion decoding module should operate at up to half the coder's rate. The module incorporates a powerful noiseless coder for Standard Form Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a dynamic range of 1.5 to 12-14 bits/sample (depending on the implementation).
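
    The core of such a coder is the Rice/Golomb mapping of a non-negative integer to a unary quotient followed by a k-bit remainder; a minimal Python sketch of that split (illustrative only, not the flight module's exact algorithm or its adaptive option selection) is:

        # Rice coding of non-negative integers: unary quotient, "0" separator, k-bit remainder.
        def rice_encode(n, k):
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

        def rice_decode(bits, k):
            q = bits.index("0")                            # length of the unary part
            r = int(bits[q + 1:q + 1 + k], 2) if k else 0
            return (q << k) | r

        for n in [0, 1, 2, 5, 13]:
            code = rice_encode(n, k=2)
            assert rice_decode(code, k=2) == n
            print(n, code)    # small integers get short codewords, as the source model assumes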

  9. A hippocampal network for spatial coding during immobility and sleep.

    PubMed

    Kay, Kenneth; Sosa, Marielena; Chung, Jason E; Karlsson, Mattias P; Larkin, Margaret C; Frank, Loren M

    2016-03-10

    How does an animal know where it is when it stops moving? Hippocampal place cells fire at discrete locations as subjects traverse space, thereby providing an explicit neural code for current location during locomotion. In contrast, during awake immobility, the hippocampus is thought to be dominated by neural firing representing past and possible future experience. The question of whether and how the hippocampus constructs a representation of current location in the absence of locomotion has been unresolved. Here we report that a distinct population of hippocampal neurons, located in the CA2 subregion, signals current location during immobility, and does so in association with a previously unidentified hippocampus-wide network pattern. In addition, signalling of location persists into brief periods of desynchronization prevalent in slow-wave sleep. The hippocampus thus generates a distinct representation of current location during immobility, pointing to mnemonic processing specific to experience occurring in the absence of locomotion. PMID:26934224

  10. Error threshold for the surface code in a superohmic environment

    NASA Astrophysics Data System (ADS)

    Lopez-Delgado, Daniel A.; Novais, E.; Mucciolo, Eduardo R.; Caldeira, Amir O.

    Using the Keldysh formalism, we study the fidelity of a quantum memory over multiple quantum error correction cycles when the physical qubits interact with a bosonic bath at zero temperature. For encoding, we employ the surface code, which has one of the highest error thresholds in the case of stochastic and uncorrelated errors. The time evolution of the fidelity of the resulting two-dimensional system is cast into a statistical mechanics phase transition problem on a three-dimensional spin lattice, and the error threshold is determined by the critical temperature of the spin model. For superohmic baths, we find that time does not affect the error threshold: its value is the same for one or an arbitrary number of quantum error correction cycles. Financial support Fapesp, and CNPq (Brazil).

  11. A generalized information function applied to the genetic code.

    PubMed

    Alvager, T; Graham, G; Hilleke, R; Hutchison, D; Westgard, J

    1990-01-01

    The problem of the partitioning of the degeneracy of the codons in the genetic code is considered in the framework of a generalized information function I_G = c Σ_k p_k (ln p_k + G(E_k)), where k represents the number of codons in a specific degeneracy class and G(E_k) is an arbitrary real-valued function. For G(E_k) = 0 the Shannon information function is recovered. For a particular choice of G(E_k) that takes the dominance of even degeneracies into account, it is found by direct numerical calculations that the correct degeneracy partitioning appears as optimal values of the I_G function. This result is also supported by optimization calculations in which the generalized information function is regarded as a continuous function in the degeneracy variables. PMID:2073543
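
    The function can be evaluated directly; a small Python sketch (with a hypothetical probability vector and toy choices of G, keeping the sign convention as written in the abstract) is:

        # Generalized information function I_G = c * sum_k p_k * (ln p_k + G(E_k)).
        import math

        def generalized_information(p, G, c=1.0):
            return c * sum(pk * (math.log(pk) + G(k)) for k, pk in enumerate(p) if pk > 0)

        p = [0.5, 0.25, 0.25]                                   # hypothetical class probabilities
        print(generalized_information(p, G=lambda k: 0.0))      # G = 0: the Shannon-type term
        print(generalized_information(p, G=lambda k: 0.1 if k % 2 == 0 else 0.0))  # toy even/odd weighting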

  12. A Software Safety Certification Plug-in for Automated Code Generators (Executive Briefing)

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Schumann, Johann; Greaves, Doug

    2006-01-01

    A viewgraph presentation describing a certification tool to check the safety of auto-generated codes is shown. The topics include: 1) Auto-generated Code at NASA; 2) Safety of Auto-generated Code; 3) Technical Approach; and 4) Project Plan.

  13. Regulations and Ethical Considerations for Astronomy Education Research III: A Suggested Code of Ethics

    ERIC Educational Resources Information Center

    Brogt, Erik; Foster, Tom; Dokter, Erin; Buxner, Sanlyn; Antonellis, Jessie

    2009-01-01

    We present an argument for, and suggested implementation of, a code of ethics for the astronomy education research community. This code of ethics is based on legal and ethical considerations set forth by U.S. federal regulations and the existing code of conduct of the American Educational Research Association. We also provide a fictitious research…

  14. Plaspp: A New X-Ray Postprocessing Capability for ASCI Codes

    SciTech Connect

    Pollak, Gregory

    2003-09-01

    This report announces the availability of the beta version of a (partly) new code, Plaspp (Plasma Postprocessor). This code postprocesses (graphics) dumps from at least two ASCI code suites: Crestone Project and Shavano Project. The basic structure of the code follows that of TDG, the equivalent postprocessor code for LASNEX. In addition to some new commands, the basic differences between TDG and Plaspp are the following: Plaspp uses a graphics dump instead of the unique TDG dump, it handles the unstructured meshes that the ASCI codes produce, and it can use its own multigroup opacity data. Because of the dump format, this code should be useable by any code that produces Cartesian, cylindrical, or spherical graphics formats. This report details the new commands; the required information to be placed on the dumps; some new commands and edits that are applicable to TDG as well, but have not been documented elsewhere; and general information about execution on the open and secure networks.

  15. The Navajo Code Talkers: A Secret World War II Memorandum.

    ERIC Educational Resources Information Center

    Jevec, Adam; Potter, Lee Ann

    2001-01-01

    Provides background information on the development of and work performed by the Navajo code talkers during World War II. Includes teaching activities for classroom use as well as examples from the code. Includes the Navajo dictionary and words with the English and Navajo meaning. (CMK)

  16. A user's manual for the Loaded Microstrip Antenna Code (LMAC)

    NASA Technical Reports Server (NTRS)

    Forrai, D. P.; Newman, E. H.

    1988-01-01

    The use of the Loaded Microstrip Antenna Code is described. The geometry of this antenna is shown and its dimensions are described in terms of the program outputs. The READ statements for the inputs are detailed and typical values are given where applicable. The inputs of four example problems are displayed with the corresponding output of the code given in the appendices.

  17. 32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Codes and Definitions of Functional Areas A Appendix A to Part 169a National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Pt. 169a, App. A Appendix A to Part 169a—Codes and Definitions of Functional Areas This list...

  18. 32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Codes and Definitions of Functional Areas A Appendix A to Part 169a National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Pt. 169a, App. A Appendix A to Part 169a—Codes and Definitions of Functional Areas This list...

  19. 32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false Codes and Definitions of Functional Areas A Appendix A to Part 169a National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Pt. 169a, App. A Appendix A to Part 169a—Codes and Definitions of Functional Areas This list...

  20. 32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 1 2014-07-01 2014-07-01 false Codes and Definitions of Functional Areas A Appendix A to Part 169a National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Pt. 169a, App. A Appendix A to Part 169a—Codes and Definitions of Functional Areas This list...

  1. 32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Codes and Definitions of Functional Areas A Appendix A to Part 169a National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Pt. 169a, App. A Appendix A to Part 169a—Codes and Definitions of Functional Areas This list...

  2. A user's manual for the computer code HORSMIC

    SciTech Connect

    Russo, A.J.

    1994-01-01

    The code HORSMIC was written to solve the problem of calculating the shape of hydrocarbon (gas or liquid) storage caverns formed by solution mining in bedded salt formations. In the past many storage caverns have been formed by vertically drilling into salt dome formations and solution mining large-aspect-ratio, vertically-axisymmetric caverns. This approach is generally not satisfactory for shallow salt beds because it would result in geomechanically-unstable, pancake-shaped caverns. In order to produce a high aspect ratio cavern in the horizontal direction a more complicated strategy must be employed. This report describes one such strategy, and documents the use of the computer model HORSMIC which can be used to estimate the shape of the cavern produced by a prescribed leaching schedule. Multiple trials can then be used to investigate the effects of various pipe hole configurations in order to optimize the cavern shape.

  3. A Watermarking Scheme for High Efficiency Video Coding (HEVC)

    PubMed Central

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance as compared to its predecessor, i.e. H.264/AVC. Considering that HEVC may be used in a variety of applications in the future, the proposed algorithm has a high potential of utilization in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it noticeably increase the bitrate. PMID:25144455
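
    As a purely illustrative sketch of the general idea of hiding a bit stream in quantized transform coefficients (not the HEVC-specific embedding of the paper), one can force the least significant bit of selected non-zero coefficients to carry the watermark bits:

        # Toy LSB embedding in non-zero quantized coefficients (illustrative, not the paper's algorithm).
        def embed_watermark(qtcs, bits):
            out, i = [], 0
            for c in qtcs:
                if c != 0 and i < len(bits):
                    sign, mag = (1 if c > 0 else -1), abs(c)
                    mag = (mag & ~1) | bits[i]
                    if mag == 0:             # never turn a carrier coefficient into zero
                        mag = 2 | bits[i]
                    out.append(sign * mag)
                    i += 1
                else:
                    out.append(c)
            return out

        def extract_watermark(qtcs, n_bits):
            return [abs(c) & 1 for c in qtcs if c != 0][:n_bits]

        coeffs = [12, 0, -7, 3, 0, 0, -1, 5]     # hypothetical quantized coefficients
        bits = [1, 0, 1, 1, 0]
        marked = embed_watermark(coeffs, bits)
        print(marked, extract_watermark(marked, len(bits)))   # the bits come back unchanged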

  4. Visualization of elastic wavefields computed with a finite difference code

    SciTech Connect

    Larsen, S.; Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.

  5. A model code for the radiative theta pinch

    SciTech Connect

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-15

    A model for the theta pinch is presented with three modelled phases of radial inward shock phase, reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation and radiation-coupled dynamics in the pinch phase. A code is written incorporating correction for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model, the coupling coefficient f between the primary loop current and the induced plasma current and the mass swept-up factor f_m. These values are taken from experiments carried out in the Chulalongkorn theta pinch.

  6. Heparan sulfate proteoglycans: a sugar code for vertebrate development?

    PubMed

    Poulain, Fabienne E; Yost, H Joseph

    2015-10-15

    Heparan sulfate proteoglycans (HSPGs) have long been implicated in a wide range of cell-cell signaling and cell-matrix interactions, both in vitro and in vivo in invertebrate models. Although many of the genes that encode HSPG core proteins and the biosynthetic enzymes that generate and modify HSPG sugar chains have not yet been analyzed by genetics in vertebrates, recent studies have shown that HSPGs do indeed mediate a wide range of functions in early vertebrate development, for example during left-right patterning and in cardiovascular and neural development. Here, we provide a comprehensive overview of the various roles of HSPGs in these systems and explore the concept of an instructive heparan sulfate sugar code for modulating vertebrate development. PMID:26487777

  7. National Combustion Code: A Multidisciplinary Combustor Design System

    NASA Technical Reports Server (NTRS)

    Stubbs, Robert M.; Liu, Nan-Suey

    1997-01-01

    The Internal Fluid Mechanics Division conducts both basic research and technology, and system technology research for aerospace propulsion systems components. The research within the division, which is both computational and experimental, is aimed at improving fundamental understanding of flow physics in inlets, ducts, nozzles, turbomachinery, and combustors. This article and the following three articles highlight some of the work accomplished in 1996. A multidisciplinary combustor design system is critical for optimizing the combustor design process. Such a system should include sophisticated computer-aided design (CAD) tools for geometry creation, advanced mesh generators for creating solid model representations, a common framework for fluid flow and structural analyses, modern postprocessing tools, and parallel processing. The goal of the present effort is to develop some of the enabling technologies and to demonstrate their overall performance in an integrated system called the National Combustion Code.

  8. Stability codes for a liquid rocket implemented for use on a PC

    NASA Technical Reports Server (NTRS)

    Armstrong, Wilbur; Doane, George C., III; Dean, Garvin

    1992-01-01

    The high frequency code has been made an interactive code using FORTRAN 5.0. The option to plot n-tau curves was added using the graphics routines of FORTRAN 5.0 and GRAFMATIC. The user can now run with input values that are either non-dimensional (as in the original code) or dimensional. Input data may be modified from the keyboard. The low and intermediate frequency codes have been run through a set of variations. This will help the user to understand how the stability of a configuration will change if any of the input data changes.

  9. Analysis of a two-dimensional type 6 shock-interference pattern using a perfect-gas code and a real-gas code

    NASA Technical Reports Server (NTRS)

    Bertin, J. J.; Graumann, B. W.

    1973-01-01

    Numerical codes were developed to calculate the two dimensional flow field which results when supersonic flow encounters double wedge configurations whose angles are such that a type 4 pattern occurs. The flow field model included the shock interaction phenomena for a delta wing orbiter. Two numerical codes were developed, one which used the perfect gas relations and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.

  10. A code of ethics for the life sciences.

    PubMed

    Jones, Nancy L

    2007-03-01

    The activities of the life sciences are essential to provide solutions for the future, for both individuals and society. Society has demanded growing accountability from the scientific community as implications of life science research rise in influence and there are concerns about the credibility, integrity and motives of science. While the scientific community has responded to concerns about its integrity in part by initiating training in research integrity and the responsible conduct of research, this approach is minimal. The scientific community justifies itself by appealing to the ethos of science, claiming academic freedom, self-direction, and self-regulation, but no comprehensive codification of this foundational ethos has been forthcoming. A review of the professional norms of science and a prototype code of ethics for the life sciences provide a framework to spur discussions within the scientific community to define scientific professionalism. A formalization of implicit principles can provide guidance for recognizing divergence from the norms, place these norms within a context that would enhance education of trainees, and provide a framework for discussing externally and internally applied pressures that are influencing the practice of science. The prototype code articulates the goal for life sciences research and the responsibilities associated with the freedom of exploration, the principles for the practice of science, and the virtues of the scientists themselves. The time is ripe for scientific communities to reinvigorate professionalism and define the basis of their social contract. Codifying the basis of the social contract between science and society will sustain public trust in the scientific enterprise. PMID:17703607

  11. A benchmark study for glacial isostatic adjustment codes

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2011-04-01

    The study of glacial isostatic adjustment (GIA) is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to loading is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements (e.g. GRACE and GOCE) to the projections of future sea level trends in response to climate change. Modern modelling approaches to GIA are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated; a community benchmark data set would clearly be valuable. Following the example of the mantle convection community, here we present, for the first time, the results of a benchmark study of codes designed to model GIA. This has taken place within a collaboration facilitated through European Cooperation in Science and Technology (COST) Action ES0701. The approaches benchmarked are based on significantly different codes and different techniques. The test computations are based on models with spherical symmetry and Maxwell rheology and include inputs from different methods and solution techniques: viscoelastic normal modes, spectral-finite elements and finite elements. The tests involve the loading and tidal Love numbers and their relaxation spectra, the deformation and gravity variations driven by surface loads characterized by simple geometry and time history and the rotational fluctuations in response to glacial unloading. In spite of the significant differences in the numerical methods employed, the test computations show a satisfactory agreement between the results provided by the participants.

  12. RMC - A Monte Carlo code for reactor physics analysis

    SciTech Connect

    Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G.

    2013-07-01

    A new Monte Carlo neutron transport code, RMC, is being developed by the Department of Engineering Physics, Tsinghua University, Beijing as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now has such functions as criticality calculation, fixed-source calculation, burnup calculation and kinetics simulations. Techniques for geometry treatment, a new burnup algorithm, source convergence acceleration, massive tallies and parallel calculation, and temperature-dependent cross section processing have been researched and implemented in RMC to improve its efficiency. Validation results for criticality calculation, burnup calculation, source convergence acceleration, tally performance and parallel performance shown in this paper prove the capability of RMC to deal with reactor analysis problems with good performance. (authors)

  13. A secure RFID authentication protocol adopting error correction code.

    PubMed

    Chen, Chien-Ming; Chen, Shuai-Min; Zheng, Xinying; Chen, Pei-Yu; Sun, Hung-Min

    2014-01-01

    RFID technology has become popular in many applications; however, most of the RFID products lack security related functionality due to the hardware limitation of the low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting error correction code for RFID. Besides, we also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol showed that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks and replay attacks. We also compare our protocol with previous works in terms of performance. PMID:24959619

  14. A Comparison of Source Code Plagiarism Detection Engines

    NASA Astrophysics Data System (ADS)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
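
    The "long common substrings" comparison mentioned above is easy to sketch; the following Python function runs on raw character strings (real engines first tokenise submissions, which this illustration skips):

        # Longest common substring by dynamic programming, O(len(a) * len(b)).
        def longest_common_substring(a, b):
            best_len, best_end = 0, 0
            prev = [0] * (len(b) + 1)
            for i in range(1, len(a) + 1):
                cur = [0] * (len(b) + 1)
                for j in range(1, len(b) + 1):
                    if a[i - 1] == b[j - 1]:
                        cur[j] = prev[j - 1] + 1
                        if cur[j] > best_len:
                            best_len, best_end = cur[j], i
                prev = cur
            return a[best_end - best_len:best_end]

        s1 = "for i in range(n): total += values[i]"
        s2 = "for k in range(n): acc += values[k]"
        print(repr(longest_common_substring(s1, s2)))   # ' in range(n): '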

  15. A Secure RFID Authentication Protocol Adopting Error Correction Code

    PubMed Central

    Zheng, Xinying; Chen, Pei-Yu

    2014-01-01

    RFID technology has become popular in many applications; however, most of the RFID products lack security related functionality due to the hardware limitation of the low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting error correction code for RFID. Besides, we also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol showed that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks and replay attacks. We also compare our protocol with previous works in terms of performance. PMID:24959619

  16. A robust low-rate coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)

    1991-01-01

    Due to the rapidly evolving fields of image processing and networking, video information promises to be an important part of telecommunication systems. Although video transmission has up to now been carried mainly over circuit-switched networks, it is likely that packet-switched networks will dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband-ISDN can provide a flexible, independent and high performance environment for video communication. For this paper, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression rate with good visual performance, robustness to packet loss, tractable integration with network mechanics and simplicity in parallel implementation.

  17. Code-Switching in English as a Foreign Language Classroom: Teachers' Attitudes

    ERIC Educational Resources Information Center

    Ibrahim, Engku Haliza Engku; Shah, Mohamed Ismail Ahamad; Armia, Najwa Tgk.

    2013-01-01

    Code-switching has always been an intriguing phenomenon to sociolinguists. While the general attitude to it seems negative, people seem to code-switch quite frequently. Teachers of English as a foreign language too frequently claim that they do not like to code-switch in the language classroom for various reasons--many are of the opinion that only…

  18. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    ERIC Educational Resources Information Center

    Bobka, Marilyn E.; Subramaniam, J.B.

    The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme complicated chemical structures may be expressed…

  19. A Comparative Study of Japanese/English Bilingual Code-Switching in Three Different Domains.

    ERIC Educational Resources Information Center

    Taura, Hideyuki

    This study examined Japanese/English code-switching in three different contexts: a bilingual radio program broadcast in Japan; language of two bilingual siblings; and an adult bilingual dinner party. Particular attention was paid to the situational meanings of code-switching and to politeness issues. Code-switching was examined first at four…

  20. A vectorized code for the pseudofermion simulation of QCD with dynamical quarks

    NASA Astrophysics Data System (ADS)

    Campostrini, Massimo; Moriarty, Kevin J. M.; Potvin, Jean; Rebbi, Claudio

    1988-08-01

    We present a FORTRAN code for the Monte Carlo simulation of Quantum Chromodynamics with dynamical fermions, using the pseudofermion algorithm. The code is fully vectorized and optimized for the CDC CYBER 205, taking advantage of high performance features like 32-bit arithmetic, gather/scatter hardware and asynchronous I/O. Nonetheless, the code is largely portable and performs well on other vector computers.

  1. 76 FR 57795 - Agency Request for Renewal of a Previously Approved Collection; Disclosure of Code Sharing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-16

    ... Code Sharing Arrangements and Long-Term Wet Leases AGENCY: Office of the Secretary. ACTION: Notice and... 20590. SUPPLEMENTARY INFORMATION: OMB Control Number: 2105-0537. Title: Disclosure of Code Sharing... between cooperating carriers, at least one of the airline designator codes used on a flight is...

  2. A Simple Model of Optimal Population Coding for Sensory Systems

    PubMed Central

    Doi, Eizaburo; Lewicki, Michael S.

    2014-01-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery. PMID:25121492

  3. ARCHY (Analysis and Reverse Engineering of Code Using Hierarchy and Yourdon): A tool for Fortran code maintenance and development

    SciTech Connect

    Aull, J.E.

    1990-10-01

    Analysis and Reverse Engineering of Code Using Hierarchy and Yourdon (ARCHY) diagrams is a tool for development and maintenance of FORTRAN programs. When FORTRAN source code is read by ARCHY, it automatically creates a database that includes a data dictionary, which lists each variable, its dimensions, type, category (set, referenced, passed), module calling structure, and common block information. The database exists in an ASCII file that can be directly edited or maintained with the ARCHY database editor. The database is used by ARCHY to produce structure charts and Yourdon data flow diagrams in PostScript format. ARCHY also transfers database information such as variable definitions, module descriptions, and technical references to and from module headers. ARCHY contains several utilities for making programs more readable. It can automatically indent the body of loops and conditionals and resequence statement labels. Various language extensions are translated into FORTRAN-77 to increase code portability. ARCHY frames comment statements and groups FORMAT statements at the end of modules. It can alphabetize modules within a program, end-of-line labels can be added, and it can also change executable statements to upper or lower case. ARCHY runs under the VAX-VMS operating system and accepts input from VAX-FORTRAN, IBM-FORTRAN, and CRAY FORTRAN source files.

  4. Assessment of MARMOT. A Mesoscale Fuel Performance Code

    SciTech Connect

    Tonks, M. R.; Schwen, D.; Zhang, Y.; Chakraborty, P.; Bai, X.; Fromm, B.; Yu, J.; Teague, M. C.; Andersson, D. A.

    2015-04-01

    MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted, and new mesoscale data must be obtained in order to complete it.
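
    As a generic illustration of the phase-field approach that such a code builds on (a textbook one-dimensional Allen-Cahn relaxation, not MARMOT or MOOSE code), an order parameter with a double-well free energy can be evolved explicitly:

        # 1-D Allen-Cahn relaxation: d(eta)/dt = -M * (dW/d(eta) - kappa * d2(eta)/dx2),
        # with the double-well free energy W(eta) = eta^2 * (1 - eta)^2.
        import numpy as np

        N, dx, dt = 200, 1.0, 0.05
        M, kappa = 1.0, 2.0
        x = np.arange(N) * dx
        eta = 0.5 * (1 + np.tanh((x - N * dx / 2) / 4))   # initial diffuse interface

        for _ in range(2000):
            lap = (np.roll(eta, -1) - 2 * eta + np.roll(eta, 1)) / dx**2   # periodic Laplacian
            dW = 2 * eta * (1 - eta) * (1 - 2 * eta)                       # derivative of W
            eta += dt * (-M * (dW - kappa * lap))

        print(eta.min(), eta.max())   # the phases relax toward 0 and 1 across smooth interfaces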

  5. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
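
    The sensitivity coefficients mentioned here can be illustrated on the simplest possible kinetics problem: for dy/dt = -k*y, the sensitivity s = dy/dk obeys ds/dt = -y - k*s and can be integrated alongside the concentration (a textbook illustration, not LSENS itself):

        # First-order decay with a sensitivity equation for the rate coefficient k.
        import math

        k, y, s = 0.5, 1.0, 0.0          # rate coefficient, initial concentration, initial dy/dk
        dt, t_end = 1.0e-4, 4.0
        t = 0.0
        while t < t_end:
            dy = -k * y
            ds = -y - k * s              # d/dk of the right-hand side -k*y
            y, s, t = y + dt * dy, s + dt * ds, t + dt

        print(y, math.exp(-k * t_end))             # numerical vs analytic concentration
        print(s, -t_end * math.exp(-k * t_end))    # numerical vs analytic sensitivity dy/dk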

  6. A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.

  7. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    SciTech Connect

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits-per-character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters-decoded-in-error and of characters-printed-in-error-per-bit-error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular, block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most-promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
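
    A minimal Huffman construction over character frequencies, of the kind compared in the study, can be sketched as follows (an illustration on toy text; the study's 58-character source statistics and the conditioned comma-free assignments are not reproduced here):

        # Build a Huffman prefix code from character frequencies and report bits per character.
        import heapq
        from collections import Counter

        def huffman_code(text):
            heap = [(freq, i, {ch: ""}) for i, (ch, freq) in enumerate(Counter(text).items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, _, c2 = heapq.heappop(heap)
                merged = {ch: "0" + code for ch, code in c1.items()}
                merged.update({ch: "1" + code for ch, code in c2.items()})
                heapq.heappush(heap, (f1 + f2, next_id, merged))
                next_id += 1
            return heap[0][2]

        text = "a toy narrative file used only for illustration"
        code = huffman_code(text)
        encoded = "".join(code[ch] for ch in text)
        print(len(encoded) / len(text), "bits per character on this toy text")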

  8. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  9. A large scale code resolution service network in the Internet of Things.

    PubMed

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network meets the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  10. A Large Scale Code Resolution Service Network in the Internet of Things

    PubMed Central

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network meets the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  11. Assessment of Codes and Standards Applicable to a Hydrogen Production Plant Coupled to a Nuclear Reactor

    SciTech Connect

    M. J. Russell

    2006-06-01

    This is an assessment of codes and standards applicable to a hydrogen production plant to be coupled to a nuclear reactor. The result of the assessment is a list of codes and standards that are expected to be applicable to the plant during its design and construction.

  12. The Evolution of a Coding Schema in a Paced Program of Research

    ERIC Educational Resources Information Center

    Winters, Charlene A.; Cudney, Shirley; Sullivan, Therese

    2010-01-01

    A major task involved in the management, analysis, and integration of qualitative data is the development of a coding schema to facilitate the analytic process. Described in this paper is the evolution of a coding schema that was used in the analysis of qualitative data generated from online forums of middle-aged women with chronic conditions who…

  13. Interfacing a fluid code (Induct95) with a particle code (PDP1) to obtain ion energy distributions in inductive and capacitive discharges

    SciTech Connect

    Kawamura, E.; Verboncoeur, J.P.; Birdsall, C.K.

    1996-12-31

    The goal is to obtain the ion angular and energy distributions at the wafer of inductive and capacitive discharges. To do this on a standard uniform mesh with particle codes alone would be impractical because of the long time scale nature of the problem (i.e., 10^6 time steps). A solution is to use a fluid code to simulate the bulk source region, while using a particle-in-cell code to simulate the sheath region. Induct95 is a 2d fluid code which can simulate inductive and capacitive discharges. Though it does not resolve the sheath region near the wafer, it provides diagnostics for the collisional bulk plasma (i.e., potentials, temperatures, fluxes, etc.). Also, fluid codes converge to equilibrium much faster than particle codes in collisional regimes. PDP1 is a 1d3v particle-in-cell code which can simulate rf discharges. It can resolve the sheath region and obtain the ion angular and energy distributions at the wafer target. The overall running time is expected to be that of the fluid code.
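
    The handoff between a fluid bulk solution and a particle treatment of the sheath can be pictured with a highly simplified sketch. The code below is neither Induct95 nor PDP1: it assumes the fluid side supplies an electron temperature and a sheath potential drop, and then launches sample ions across an idealized collisionless DC sheath to estimate the ion energy distribution at the wafer. The sheath model and all numbers are assumptions chosen only to illustrate the coupling idea.

```python
# Conceptual fluid-to-particle handoff for a sheath (illustration only).
import numpy as np

e, m_i = 1.602e-19, 6.6e-26           # elementary charge [C], argon ion mass [kg]

# --- quantities a fluid bulk solution might hand to the sheath model ---
Te_eV    = 3.0                        # electron temperature from the bulk [eV]
V_sheath = 15.0                       # potential drop across the sheath [V]

# --- simplified particle treatment of the sheath region ---
rng = np.random.default_rng(1)
n_particles = 100_000
u_bohm = np.sqrt(e * Te_eV / m_i)                     # Bohm speed at the sheath edge
v_entry = rng.normal(u_bohm, 0.1 * u_bohm, n_particles)   # assumed small thermal spread
E_entry = 0.5 * m_i * v_entry**2 / e                  # entry energy [eV]
E_wafer = E_entry + V_sheath                          # energy gained crossing the sheath

hist, edges = np.histogram(E_wafer, bins=50)
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"ion energy distribution peaks near {peak:.1f} eV at the wafer")
```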

  14. Description of a parallel, 3D, finite element, hydrodynamics-diffusion code

    SciTech Connect

    Milovich, J L; Prasad, M K; Shestakov, A I

    1999-04-11

    We describe a parallel, 3D, unstructured grid, finite element, hydrodynamic diffusion code for inertial confinement fusion (ICF) applications and the ancillary software used to run it. The code system is divided into two entities, a controller and a stand-alone physics code, which may reside on different computers: the controller on the user's workstation and the physics code on a supercomputer. The physics code is composed of separate hydrodynamic, equation-of-state, laser energy deposition, heat conduction, and radiation transport packages and is parallelized for distributed memory architectures. For parallelization, an SPMD model is adopted; the domain is decomposed into a disjoint collection of subdomains, one per processing element (PE). The PEs communicate using MPI. The code is used to simulate the hydrodynamic implosion of a spherical bubble.
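
    The SPMD decomposition with MPI communication can be illustrated with a minimal sketch. This is not the code described above; it is a 1-D explicit diffusion solve in which each MPI rank owns one subdomain and exchanges ghost cells with its neighbors via mpi4py, for example run as `mpiexec -n 4 python diffusion_spmd.py`.

```python
# Minimal SPMD sketch: 1-D diffusion with one subdomain per MPI rank and
# ghost-cell exchange between neighbors (illustration only).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 50                                  # interior cells per subdomain (assumed)
u = np.zeros(n_local + 2)                     # local solution with two ghost cells
if rank == 0:
    u[1] = 1.0                                # arbitrary hot spot as initial data

left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

alpha = 0.25                                  # diffusion number dt*D/dx^2 (stable)
for _ in range(200):
    # Exchange ghost cells with neighboring subdomains.
    comm.Sendrecv(u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Explicit diffusion update on the interior cells.
    u[1:-1] += alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])

local_sum = np.array(u[1:-1].sum())
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"total 'mass' after diffusion: {total:.6f}")
```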

  15. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times but gives very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses because of the superior results it can provide over empirical and semi-empirical methods and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  16. TACI: a code for interactive analysis of neutron data produced by a tissue equivalent proportional counter

    SciTech Connect

    Cummings, F.M.

    1984-06-01

    The TEPC analysis code (TACI) is a computer program designed to analyze pulse height data generated by a tissue equivalent proportional counter (TEPC). It is written in HP BASIC and is for use on an HP-87XM personal computer. The theory of TEPC analysis upon which this code is based is summarized.

  17. Teaching, Morality, and Responsibility: A Structuralist Analysis of a Teachers' Code of Conduct

    ERIC Educational Resources Information Center

    Shortt, Damien; Hallett, Fiona; Spendlove, David; Hardy, Graham; Barton, Amanda

    2012-01-01

    In this paper we conduct a Structuralist analysis of the General Teaching Council for England's "Code of Conduct and Practice for Registered Teachers" in order to reveal how teachers are required to fulfil an apparently impossible social role. The GTCE's "Code," we argue, may be seen as an attempt by a government agency to resolve the political…

  18. Is It Code Imperfection or 'Garbage In, Garbage Out'? Outline of Experiences from a Comprehensive ADR Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The ADR (advection-diffusion-reaction) equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice for solving these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In this context, verification of computational PDE solvers is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of the algorithm and a coding error. Therefore, it is well known that code verification remains something of an art, in which innovative methods and case-specific tricks are common. This study presents full verification of a general transport code. To that end, a complete test suite is designed to probe the ADR solver comprehensively and discover all possible imperfections. In this study we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, such that tests start simple and build to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capabilities in handling stiff problems, nonnegative concentration, shape preservation, and
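
    One of the verification metrics mentioned above, convergence order, can be illustrated with a minimal sketch. The solver and analytical solution below are placeholders, not the transport code studied in the paper: a 1-D diffusion problem with a known exact solution is solved on successively refined grids and the observed order of accuracy is computed from the error ratios.

```python
# Observed-order-of-convergence check against a known analytical solution
# (placeholder solver for illustration only).
import numpy as np

def solve_diffusion(n, D=0.1, t_end=0.1):
    """Explicit finite-difference solution of u_t = D u_xx on [0,1], u(0)=u(1)=0."""
    x = np.linspace(0.0, 1.0, n + 1)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2 / D                      # stable explicit time step
    steps = int(np.ceil(t_end / dt))
    dt = t_end / steps
    u = np.sin(np.pi * x)                      # initial condition with known decay
    for _ in range(steps):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    exact = np.exp(-D * np.pi**2 * t_end) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))           # L-infinity error

errors = [solve_diffusion(n) for n in (20, 40, 80, 160)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("observed convergence orders:", [f"{p:.2f}" for p in orders])
# A second-order-accurate spatial scheme should show orders approaching 2.
```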

  19. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating the flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to the design of such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm: the Baldwin-Lomax algebraic model and the Jones-Launder two-equation model of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements needed to proceed with the validation process are also discussed.

  20. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
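
    The restricted three-body idea can be pictured with a small sketch. The code below is not the JSPAM algorithm: it integrates massless test particles, drawn from a disk around a primary galaxy, in the combined softened point-mass potential of the primary and a fixed companion, so the particles trace the tidal disturbance without contributing to the potential. Units, masses, and the fixed companion position are arbitrary assumptions for illustration.

```python
# Restricted three-body test-particle sketch (illustration only, not JSPAM).
import numpy as np

G, M1, M2 = 1.0, 1.0, 0.3                  # gravitational constant and galaxy masses
r2 = np.array([4.0, 0.0])                  # fixed companion position (simplification)

def accel(pos):
    """Acceleration from two softened point masses; pos has shape (n, 2)."""
    eps = 0.05                             # softening length
    d1 = pos                               # primary at the origin
    d2 = pos - r2
    a1 = -G * M1 * d1 / (np.sum(d1**2, axis=1, keepdims=True) + eps**2)**1.5
    a2 = -G * M2 * d2 / (np.sum(d2**2, axis=1, keepdims=True) + eps**2)**1.5
    return a1 + a2

# Test particles on circular orbits in the primary's disk.
rng = np.random.default_rng(2)
radius = rng.uniform(0.5, 2.0, 500)
theta = rng.uniform(0.0, 2 * np.pi, 500)
pos = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
v_circ = np.sqrt(G * M1 / radius)
vel = np.column_stack([-v_circ * np.sin(theta), v_circ * np.cos(theta)])

dt = 0.01
for _ in range(2000):                      # leapfrog (kick-drift-kick) integration
    vel += 0.5 * dt * accel(pos)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos)

print("mean particle radius after the encounter:",
      np.mean(np.hypot(pos[:, 0], pos[:, 1])))
```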