Science.gov

Sample records for a codes

  1. A coded tracking telemetry system

    USGS Publications Warehouse

    Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.

    1989-01-01

    We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a microcomputer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.
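
    The receiver-side logic the abstract describes can be sketched minimally; the code values and event format below are hypothetical, not from the paper.

```python
# Pre-programmed transmitter codes the receiver accepts (hypothetical values).
ACCEPTED_CODES = {0x2A, 0x3F, 0x51}

def log_receptions(events, accepted=ACCEPTED_CODES):
    """Keep only (code, timestamp) receptions whose code matches a
    pre-programmed transmitter; all other signals are ignored."""
    return [(code, ts) for code, ts in events if code in accepted]

# Three receptions on the shared frequency, one from an unknown transmitter.
events = [
    (0x2A, "1989-06-01T12:00:00"),
    (0x77, "1989-06-01T12:00:30"),  # unknown code: discarded
    (0x3F, "1989-06-01T12:01:00"),
]
records = log_receptions(events)
```

    Because each individual is identified by its code rather than by its frequency, many such records can share one channel.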

  2. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  3. HADES, A Radiographic Simulation Code

    SciTech Connect

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.

  4. The Proteomic Code: a molecular recognition code for proteins

    PubMed Central

    Biro, Jan C

    2007-01-01

    Background The Proteomic Code is a set of rules by which information in genetic material is transferred into the physico-chemical properties of amino acids. It determines how individual amino acids interact with each other during folding and in specific protein-protein interactions. The Proteomic Code is part of the redundant Genetic Code. Review The 25-year-old history of this concept is reviewed from the first independent suggestions by Biro and Mekler, through the works of Blalock, Root-Bernstein, Siemion, Miller and others, followed by the discovery of a Common Periodic Table of Codons and Nucleic Acids in 2003 and culminating in the recent conceptualization of partial complementary coding of interacting amino acids as well as the theory of the nucleic acid-assisted protein folding. Methods and conclusions A novel cloning method for the design and production of specific, high-affinity-reacting proteins (SHARP) is presented. This method is based on the concept of proteomic codes and is suitable for large-scale, industrial production of specifically interacting peptides. PMID:17999762

  5. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
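
    The abstract states the theorems without an example. The classical result — the minimum-weight words of a product code are tensor products of minimum-weight component words, so their number is the product of the component counts (over GF(2)) — can be checked by brute force on a toy case, here the product of two [3,2] binary single-parity-check codes:

```python
from itertools import product

# Generator matrix of the [3,2] binary single-parity-check code
# (d = 2, with three minimum-weight codewords: 011, 101, 110).
G = [[1, 0, 1],
     [0, 1, 1]]

def encode_product(M, G1, G2):
    """Encode a k1 x k2 message matrix as X = G1^T M G2 over GF(2)."""
    n1, n2 = len(G1[0]), len(G2[0])
    X = [[0] * n2 for _ in range(n1)]
    for a in range(n1):
        for b in range(n2):
            s = 0
            for i in range(len(G1)):
                for j in range(len(G2)):
                    s += G1[i][a] * M[i][j] * G2[j][b]
            X[a][b] = s % 2
    return X

# Enumerate all 2^(k1*k2) = 16 codewords of the product code, tally weights.
min_w, count = None, 0
for bits in product([0, 1], repeat=4):
    M = [list(bits[0:2]), list(bits[2:4])]
    w = sum(sum(row) for row in encode_product(M, G, G))
    if w == 0:
        continue
    if min_w is None or w < min_w:
        min_w, count = w, 1
    elif w == min_w:
        count += 1
# Expect minimum distance d = 2*2 = 4 and 3*3 = 9 minimum-weight words.
```

    The nine weight-4 codewords are exactly the 2x2 "rectangle" patterns, i.e. the tensor products of the component weight-2 words.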

  6. A Better Handoff for Code Officials

    SciTech Connect

    Conover, David R.; Yerkes, Sara

    2010-09-24

    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn’t leave much time to search www.energycodes.gov, www.iccsafe.org, or the range of other helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach’s game plan, the Resource Guide is an "energy playbook" for code officials.

  7. A Mathematical Representation of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Hill, Vanessa J.; Rowlands, Peter

    Algebraic and geometric representations of the genetic code are used to show their functions in coding for amino acids. The algebra is a 64-part vector quaternion combination, and the geometry is based on the structure of the regular icosidodecahedron. An almost perfect pattern suggests that this is a biologically significant way of representing the genetic code.

  8. Ethical Codes: A Standard for Ethical Behavior.

    ERIC Educational Resources Information Center

    Egan, Katherine

    1990-01-01

    Examines the codes of ethics of three major education associations (the National Association of Secondary School Principals, the National Education Association, and the American Association of School Administrators) and their usefulness in developing a school-specific code. The codes' language reveals how these organizations think about students,…

  9. HERCULES: A Pattern Driven Code Transformation System

    SciTech Connect

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing; Ilsche, Thomas; Joubert, Wayne; Graham, Richard L

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  10. A Code of Practice for Further Education.

    ERIC Educational Resources Information Center

    Walker, Liz; Turner, Anthea

    This draft is the outcome of a project in which colleges and further education (FE) teacher education providers worked to pilot a code developed by students and staff at Loughborough College in England. The code is intended to be a resource for improving practice and enhancing the standing of the FE sector. It focuses on the essentials, affirms…

  11. A thesaurus for a neural population code.

    PubMed

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-09-08

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns.

  12. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  13. A distributed particle simulation code in C++

    SciTech Connect

    Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.

    1992-03-01

    Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.

  14. The Nuremberg Code-A critique.

    PubMed

    Ghooi, Ravindra B

    2011-04-01

    The Nuremberg Code, drafted at the end of the Doctors' Trial in Nuremberg in 1947, has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of the ten principles in the Nuremberg Code are derived from the 1931 Guidelines, and two of the four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside since, unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics. PMID:21731859

  15. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and to link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Fewer than 2% of the codes produced by the algorithm are erroneous.
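
    The three steps can be sketched directly; the phrase table and correction rule below are invented for illustration, since the Survey's actual auxiliary files are not given in the abstract.

```python
# Hypothetical phrase table and correction rule, not the Survey's real files.
PHRASE_CODES = {
    "fine sand": "Z1",
    "coarse sand": "Z3",
    "sand": "Z2",
    "with clay layers": "K1s",
    "clay": "K1",
}
INVALID_COMBOS = {("Z1", "Z3"): ["Z2"]}  # contradictory grain sizes -> generic sand

def encode_log(text):
    # Step 1: search for defined word combinations (longest first) and assign codes.
    codes, remaining = [], text.lower()
    for phrase in sorted(PHRASE_CODES, key=len, reverse=True):
        if phrase in remaining:
            codes.append(PHRASE_CODES[phrase])
            remaining = remaining.replace(phrase, " ")
    # Step 2: delete duplicated codes, preserving order.
    unique = []
    for c in codes:
        if c not in unique:
            unique.append(c)
    # Step 3: correct incorrect code combinations.
    for bad, good in INVALID_COMBOS.items():
        if all(b in unique for b in bad):
            unique = [c for c in unique if c not in bad] + good
    return unique

result = encode_log("Fine sand with clay layers")
```

    Matching longer phrases first prevents "sand" from firing inside "fine sand", which is the main subtlety of the direct method.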

  16. Do plant cell walls have a code?

    PubMed

    Tavares, Eveline Q P; Buckeridge, Marcos S

    2015-12-01

    A code is a set of rules that establishes correspondence between two worlds: signs (consisting of encrypted information) and meaning (of the decrypted message). A third element, the adaptor, connects both worlds, assigning meaning to a code. We propose that a Glycomic Code exists in plant cell walls, where signs are represented by monosaccharides and phenylpropanoids and meaning is cell wall architecture, with its highly complex association of polymers. Cell wall biosynthetic mechanisms, structure, architecture, and properties are addressed from a Code Biology perspective, focusing on how they oppose cell wall deconstruction. Cell wall hydrolysis is treated mainly as a mechanism of decryption of the Glycomic Code. Evidence for encoded information in the fine structure of cell wall polymers is highlighted, and the implications of the existence of the Glycomic Code are discussed. Aspects of fine structure are responsible for polysaccharide packing and polymer-polymer interactions, which affect the final cell wall architecture. Finally, we address whether polymer assemblies within a wall display properties similar to those of other biological macromolecules (i.e. proteins, DNA, histones): do they, too, display a code?

  17. A Fortran 90 code for magnetohydrodynamics

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  18. Report on a workshop concerning code validation

    SciTech Connect

    1996-12-01

    The design of wind turbine components is becoming more critical as turbines become lighter and more dynamically active. Computer codes that will reliably predict turbine dynamic response are, therefore, more necessary than before. However, predicting the dynamic response of very slender rotating structures that operate in turbulent winds is not a simple matter. Even so, codes for this purpose have been developed and tested in North America and in Europe, and it is important to disseminate information on this subject. The purpose of this workshop was to allow those involved in the wind energy industry in the US to assess the progress in validation of the codes most commonly used for structural/aero-elastic wind turbine simulation. The theme of the workshop was, "How do we know it's right?" This was the question that participants were encouraged to ask themselves throughout the meeting in order to avoid the temptation of presenting information in a less-than-critical atmosphere. Other questions posed at the meeting were: What is the proof that the codes used can truthfully represent the field data? At what steps were the codes tested against known solutions, or against reliable field data? How should the designer or user validate results? What computer resources are needed? How do codes being used in Europe compare with those used in the US? How does the code used affect industry certification? What can be expected in the future?

  19. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
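
    The abstract omits the codec details, but the layering principle behind progressive transmission can be illustrated under assumed parameters: each sample is split into a coarse base layer and a refinement layer, so a lost refinement packet only degrades, rather than destroys, the reconstruction.

```python
def split_layers(samples, step=16):
    """Split samples into a coarse base layer (quantized) and a
    refinement layer carrying the residual detail."""
    base = [s // step for s in samples]      # base layer: coarse quantization
    refine = [s % step for s in samples]     # enhancement layer: residual
    return base, refine

def reconstruct(base, refine, step=16):
    # A missing refinement value (None) falls back to the bin midpoint,
    # giving graceful degradation instead of a gross error.
    return [b * step + (r if r is not None else step // 2)
            for b, r in zip(base, refine)]

pixels = [200, 37, 118, 90]
base, refine = split_layers(pixels)
exact = reconstruct(base, refine)    # all packets arrived
refine[2] = None                     # one refinement packet was lost
degraded = reconstruct(base, refine)
```

    With both layers the reconstruction is exact; losing a refinement packet bounds the error by half a quantization step rather than corrupting the sample.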

  20. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  1. The genetic code as a periodic table.

    PubMed

    Jungck, J R

    1978-08-01

    The contemporary genetic code reflects a significant, periodic correlation between the properties of amino acids and their anticodons. Almost all properties of amino acids showed a greater correlation to anticodonic than to codonic dinucleoside monophosphate properties. The polarity and bulkiness of amino acid side chains can be used to predict the anticodon with considerable confidence. The results are most consistent with predictions of the "direct interaction" and "ambiguity reduction" hypotheses for the origin of the genetic code.

  2. A database coding system for vascular procedures.

    PubMed

    Harris, K A; DeRose, G; Jamieson, W

    1991-01-01

    A coding system was developed to overcome the difficulties encountered in data registry and retrieval for a national audit. In vascular surgery, operations are frequently combined, and neither the OHIP fee schedule of codes (Ontario, Canada) nor the ICD-9 system provides sufficient detail for most vascular surgeons to retrieve information for long-term follow-up; at the same time, some surgeons wish to record only minimal data on their operative procedures. A numeric classification system was therefore developed. A five-digit number is used, the first two digits classifying the operative procedure and anatomic details. Two decimal digits code the classification of the operation (e.g., aortic aneurysm, tube graft, aortoiliac, or aortobifemoral), and the final digit may be used as a modifier. "Holes" in the numeric system allow new operations to be added as they develop. Codes are stored in a database with the following fields: 1) code; 2) description of operation; 3) translation. The translation field may be modified to permit translation of any existing databases into the system. This database has been distributed with a data registry program, free of charge, to vascular surgeons in Canada to allow nationwide registry of vascular surgery patients. A numeric code eliminates spelling and abbreviation errors and can be sufficiently broad-based to allow all surgeons to participate in a nationwide audit.
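
    The digit layout can be illustrated with a small parser. The 'PP.CCM' arrangement and the table entries below are assumptions for illustration; the registry's actual tables are not reproduced in the abstract.

```python
# Illustrative entries only, not the registry's actual code assignments.
PROCEDURE = {"12": "aortic aneurysm repair"}       # hypothetical
CLASSIFICATION = {"34": "tube graft, aortoiliac"}  # hypothetical

def parse_vascular_code(code):
    """Split an assumed 'PP.CCM' five-digit code: two digits for the
    operative procedure and anatomy, two decimal digits for the operation
    classification, and a final modifier digit."""
    whole, decimals = code.split(".")
    return {
        "procedure": PROCEDURE.get(whole, "unassigned"),
        "classification": CLASSIFICATION.get(decimals[:2], "unassigned"),
        "modifier": decimals[2],
    }

record = parse_vascular_code("12.341")
```

    A purely numeric, fixed-width code like this is what eliminates the spelling and abbreviation errors the abstract mentions.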

  3. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.

  4. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562
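
    PC/BC-DIM itself uses divisive update rules; as a generic sketch of the predictive-coding idea the abstract builds on (with toy weights and input, not the paper's model), a latent cause can be refined iteratively until its top-down prediction cancels the prediction error:

```python
# Generative weights: 3 observed inputs driven by 2 latent causes (toy values).
W = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
x = [1.0, 2.0, 3.0]  # observed input
y = [0.0, 0.0]       # latent causes, refined over iterations

lr = 0.1
for _ in range(200):
    # Top-down prediction of the input from the current causes.
    pred = [sum(W[i][j] * y[j] for j in range(2)) for i in range(3)]
    # Prediction error drives the update of the causes.
    err = [x[i] - pred[i] for i in range(3)]
    for j in range(2):
        y[j] += lr * sum(W[i][j] * err[i] for i in range(3))
```

    For this input the causes converge to y = [1, 2], at which point the prediction reproduces x exactly and the error signal vanishes: the hallmark of a settled predictive-coding network.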

  5. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented, demonstrating the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.

  6. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  7. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  8. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  9. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    ERIC Educational Resources Information Center

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  10. Academic Integrity in Honor Code and Non-Honor Code Environments: A Qualitative Investigation.

    ERIC Educational Resources Information Center

    McCabe, Donald; Trevino, Linda Klebe; Butterfield, Kenneth D.

    1999-01-01

    Survey data from 4,285 students in 31 colleges and universities indicates students at schools with academic honor codes view the issue of academic integrity in a fundamentally different way than do students at non-honor code institutions. This difference seems to stem from the presence of an honor code and its influence on the way students think…

  11. FREEFALL: A seabed penetrator flight code

    SciTech Connect

    Hickerson, J.

    1988-01-01

    This report presents a one-dimensional model and computer program for predicting the motion of seabed penetrators. The program calculates the acceleration, velocity, and depth of a penetrator as a function of time from the moment of launch until the vehicle comes to rest in the sediment. The code is written in the Pascal language for use on a small personal computer. Results are presented as printed tables and graphs. A comparison with experimental data is given which indicates that the accuracy of the code is perhaps as good as current techniques for measuring vehicle performance. 31 refs., 12 figs., 5 tabs.
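
    A toy one-dimensional analogue shows the kind of integration such a flight code performs. The force models and constants here are assumptions for illustration (gravity and quadratic water drag above the seabed, a constant sediment resistance below it), not FREEFALL's actual models.

```python
g = 9.81               # gravitational acceleration, m/s^2
drag = 0.02            # water drag coefficient per unit mass, 1/m (assumed)
seabed = 50.0          # water depth, m (assumed)
sediment_decel = 40.0  # resistive deceleration in sediment, m/s^2 (assumed)
dt = 0.001             # time step, s

t, v, z = 0.0, 0.0, 0.0   # time, downward velocity, depth below launch point
while True:
    if z < seabed:
        a = g - drag * v * v     # water column: gravity minus quadratic drag
    else:
        a = g - sediment_decel   # sediment: net deceleration
    v += a * dt                  # explicit (Euler) integration
    z += v * dt
    t += dt
    if z >= seabed and v <= 0.0: # the vehicle has come to rest in the sediment
        break

penetration = z - seabed         # depth of burial in the sediment
```

    Printing (t, v, z) at each step would reproduce the report's tabular output format: acceleration, velocity, and depth versus time until rest.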

  12. A coding procedure for teachers.

    PubMed

    Kubany, E S; Sloggett, B B

    1973-01-01

    An observational technique for reliably estimating the per cent of time a student engages in appropriate and inappropriate classroom behavior is described. The regular classroom teacher can utilize the procedure without deviating from regular routine, and the obtained data can serve as a basis for dispensing token reinforcement. PMID:16795415
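
    The per cent estimate reduces to simple interval tallying. A sketch under assumed conventions (each observation interval is marked 'A' for appropriate or 'I' for inappropriate behavior; interval length is left unspecified, as in the abstract):

```python
def percent_appropriate(marks):
    """Estimate per cent of time on appropriate behavior from interval marks."""
    return 100.0 * marks.count("A") / len(marks)

# One class period observed in 20 intervals (illustrative data).
marks = list("AAIAAAAIAAAIAAAAAAIA")
pct = percent_appropriate(marks)
```

    The same tally can serve as the basis for dispensing token reinforcement, e.g. awarding tokens when the percentage exceeds a criterion.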

  13. DUNE - a granular flow code

    SciTech Connect

    Slone, D M; Cottom, T L; Bateson, W B

    2004-11-23

    DUNE was designed to accurately model the spectrum of granular flow. Granular flow encompasses the motions of discrete particles. The particles are macroscopic in that there is no Brownian motion. The flow can be thought of as a dispersed phase (the particles) interacting with a fluid phase (air or water). Validation of the physical models proceeds in tandem with simple experimental confirmation. The current development team is working toward the goal of building a flexible architecture in which existing technologies can easily be integrated to further the capability of the simulation. We describe the DUNE architecture in some detail, using physics models appropriate for an imploding liner experiment.

  14. Odor Coding by a Mammalian Receptor Repertoire

    PubMed Central

    Saito, Harumi; Chi, Qiuyi; Zhuang, Hanyi; Matsunami, Hiro; Mainland, Joel D.

    2009-01-01

    Deciphering olfactory encoding requires a thorough description of the ligands that activate each odorant receptor (OR). In mammalian systems, however, ligands are known for fewer than 50 of over 1400 human and mouse ORs, greatly limiting our understanding of olfactory coding. We performed high-throughput screening of 93 odorants against 464 ORs expressed in heterologous cells and identified agonists for 52 mouse and 10 human ORs. We used the resulting interaction profiles to develop a predictive model relating physicochemical odorant properties, OR sequences, and their interactions. Our results provide a basis for translating odorants into receptor neuron responses and unraveling mammalian odor coding. PMID:19261596

  15. A MULTIPURPOSE COHERENT INSTABILITY SIMULATION CODE

    SciTech Connect

    BLASKIEWICZ,M.

    2007-06-25

    A multipurpose coherent instability simulation code has been written, documented, and released for use. TRANFT (tran-eff-tee) uses fast Fourier transforms to model transverse wakefields, transverse detuning wakes and longitudinal wakefields in a computationally efficient way. Dual harmonic RF allows for the study of enhanced synchrotron frequency spread. When coupled with chromaticity, the theoretically challenging but highly practical post head-tail regime is open to study. Detuning wakes allow for transverse space charge forces in low energy hadron beams, and a switch allowing for radiation damping makes the code useful for electrons.

  16. Design of a physical format coding system

    NASA Astrophysics Data System (ADS)

    Hu, Beibei; Pei, Jing; Zhang, Qicheng; Liu, Hailong; Tang, Yi

    2008-12-01

    A novel design for a physical format coding system (PFCS) is presented, based on the multi-level read-only memory disc (ML ROM), in order to solve the problem of the low efficiency and long turnaround of disc testing during system development. The PFCS is composed of five units: 'Encode', 'Add Noise', 'Decode', 'Error Rate', and 'Information'. It is developed with MFC in the VC++ 6.0 environment and can visually simulate the data-processing procedure for ML ROM. The system can also be used for developing other optical disc storage systems or similar channel coding systems.
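
    The actual ML ROM channel code is not described in the abstract, but a stand-in for the first four units, using a 3x repetition code, shows how such a pipeline fits together:

```python
def encode(bits):                              # 'Encode' unit
    return [b for b in bits for _ in range(3)]

def add_noise(channel_bits, flip_positions):   # 'Add Noise' unit
    out = list(channel_bits)
    for p in flip_positions:
        out[p] ^= 1                            # flip the chosen channel bits
    return out

def decode(channel_bits):                      # 'Decode' unit: majority vote
    return [int(sum(channel_bits[i:i + 3]) >= 2)
            for i in range(0, len(channel_bits), 3)]

def error_rate(sent, received):                # 'Error Rate' unit
    return sum(s != r for s, r in zip(sent, received)) / len(sent)

data = [1, 0, 1, 1, 0]
noisy = add_noise(encode(data), flip_positions=[1, 7])  # at most one flip per triple
decoded = decode(noisy)
ber = error_rate(data, decoded)
```

    Running the same pipeline with heavier, localized noise (two flips in one triple) produces a nonzero error rate, which is exactly the figure the 'Error Rate' unit reports back during disc-format development.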

  17. Finding the Key to a Better Code: Code Team Restructure to Improve Performance and Outcomes

    PubMed Central

    Prince, Cynthia R.; Hines, Elizabeth J.; Chyou, Po-Huang; Heegeman, David J.

    2014-01-01

    Code teams respond to acute life threatening changes in a patient’s status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team’s resuscitation could be hindered. To improve the overall performance of our hospital’s code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements in team member confidence in the skills specific to their role and clarity in their role’s position were identified. Smaller improvements were seen in team leadership and reduction in the amount of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year since the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as decreased the average time-to-defibrillation, making it beneficial to many, including the team members, medical institution, and patients. PMID:24667218

  18. Stereo image coding: a projection approach.

    PubMed

    Aydinoğlu, H; Hayes, M H

    1998-01-01

    Recently, due to advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. This paper focuses on the stereo image coding problem. We begin with a description of the problem and a survey of current stereo coding techniques. A new stereo image coding algorithm that is based on disparity compensation and subspace projection is described. This algorithm, the subspace projection technique (SPT), is a transform domain approach with a space-varying transformation matrix and may be interpreted as a spatial-transform domain representation of the stereo data. The advantage of the proposed approach is that it can locally adapt to the changes in the cross-correlation characteristics of the stereo pairs. Several design issues and implementations of the algorithm are discussed. Finally, we present empirical results suggesting that the SPT approach outperforms current stereo coding techniques. PMID:18276269
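Disparity compensation, the first stage of the algorithm described above, is commonly implemented as horizontal block matching between the two views; a minimal sketch (the paper's subspace projection step itself is not reproduced here):

```python
def block_match_disparity(left, right, block, max_disp):
    """For each block of the right image, find the horizontal shift into
    the left image that minimizes the sum of absolute differences (SAD).
    Images are lists of lists of ints; returns one disparity per block."""
    rows, cols = len(left), len(left[0])
    disparities = []
    for by in range(0, rows - block + 1, block):
        for bx in range(0, cols - block + 1, block):
            best_d, best_sad = 0, float("inf")
            # only shifts that keep the block inside the left image
            for d in range(0, min(max_disp, cols - block - bx) + 1):
                sad = sum(abs(left[y][x + d] - right[y][x])
                          for y in range(by, by + block)
                          for x in range(bx, bx + block))
                if sad < best_sad:
                    best_d, best_sad = d, sad
            disparities.append(best_d)
    return disparities

# A tiny synthetic pair: the left image is the right image shifted by 2 px.
right = [[(x + 3 * y) % 7 for x in range(10)] for y in range(4)]
left = [[row[max(x - 2, 0)] for x in range(10)] for row in right]
print(block_match_disparity(left, right, block=4, max_disp=3))   # [2, 2]
```

A coder then transmits the disparity field plus the (much smaller) compensation residual instead of the second image; the SPT additionally adapts the residual transform to local cross-correlation.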

  19. A Code of Ethics for Democratic Leadership

    ERIC Educational Resources Information Center

    Molina, Ricardo; Klinker, JoAnn Franklin

    2012-01-01

    Democratic leadership rests on sacred values, awareness, judgement, motivation and courage. Four turning points in a 38-year school administrator's career revealed decision-making in problematic moments stemmed from values in a personal and professional code of ethics. Reflection on practice and theory added vocabulary and understanding to make…

  20. TACO: a finite element heat transfer code

    SciTech Connect

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady state problems. Either plane or axisymmetric geometries can be analyzed. TACO has the capability to handle time- or temperature-dependent material properties, and materials may be either isotropic or orthotropic. A variety of time- and temperature-dependent loadings and boundary conditions are available, including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemically reactive kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, and the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code.

  1. CHEETAH: A next generation thermochemical code

    SciTech Connect

    Fried, L.; Souers, P.

    1994-11-01

    CHEETAH is an effort to bring the TIGER thermochemical code into the 1990s. A wide variety of improvements have been made in Version 1.0. We have improved the robustness and ease of use of TIGER. All of TIGER's solvers have been replaced by new algorithms. We find that CHEETAH solves a wider variety of problems with no user intervention (e.g. no guesses for the C-J state) than TIGER did. CHEETAH has been made simpler to use than TIGER; typical use of the code occurs with the new standard run command. CHEETAH will make the use of thermochemical codes more attractive to practical explosive formulators. We have also made an extensive effort to improve over the results of TIGER. CHEETAH's version of the BKW equation of state (BKWC) is able to accurately reproduce energies from cylinder tests; something that other BKW parameter sets have been unable to do. Calculations performed with BKWC execute very quickly; typical run times are under 10 seconds on a workstation. In the future we plan to improve the underlying science in CHEETAH. More accurate equations of state will be used in the gas and the condensed phase. A kinetics capability will be added to the code that will predict reaction zone thickness. Further ease of use features will eventually be added; an automatic formulator that adjusts concentrations to match desired properties is planned.

  2. FLUKA: A Multi-Particle Transport Code

    SciTech Connect

    Ferrari, A.; Sala, P.R.; Fasso, A.; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  3. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes a given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
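The threshold-driven selection of a coder per block can be sketched with two stand-in quantizers (the paper's actual DCT coder bank and thresholds are not reproduced here): each block is first tried with the cheap coder and escalated to the expensive one only if its distortion exceeds a threshold.

```python
def quantize(block, step):
    """Uniform quantizer: a stand-in for a DCT coder at a given rate."""
    return [round(v / step) * step for v in block]

def mse(a, b):
    """Mean squared error between original and coded block."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mixture_code(blocks, coarse_step, fine_step, threshold):
    """Code each block with the coarse (low-rate) quantizer; if its
    distortion exceeds the threshold, fall back to the fine (high-rate)
    quantizer -- a threshold-driven distortion criterion."""
    coded, choices = [], []
    for block in blocks:
        coarse = quantize(block, coarse_step)
        if mse(block, coarse) <= threshold:
            coded.append(coarse)
            choices.append("coarse")
        else:
            coded.append(quantize(block, fine_step))
            choices.append("fine")
    return coded, choices

smooth = [10, 10, 11, 11]   # low-activity block: the coarse coder suffices
busy = [3, 37, 4, 36]       # high-activity block: escalates to the fine coder
coded, choices = mixture_code([smooth, busy], coarse_step=8, fine_step=2,
                              threshold=8.0)
print(choices)   # ['coarse', 'fine']
```

The rate savings come from spending bits only where the distortion criterion demands them, which is the essence of MBC's mixture of coders.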

  4. Towards a biological coding theory discipline.

    SciTech Connect

    May, Elebeoba Eni

    2003-09-01

    How can information required for the proper functioning of a cell, an organism, or a species be transmitted in an error-introducing environment? Clearly, similar to engineering communication systems, biological systems must incorporate error control in their information transmission processes. If genetic information in the DNA sequence is encoded in a manner similar to error control encoding, the received sequence, the messenger RNA (mRNA), can be analyzed using coding theory principles. This work explores potential parallels between engineering communication systems and the central dogma of genetics and presents a coding theory approach to modeling the process of protein translation initiation. The messenger RNA is viewed as a noisy encoded sequence and the ribosome as an error control decoder. Decoding models based on chemical and biological characteristics of the ribosome and the ribosome binding site of the mRNA are developed, and results of applying the models to Escherichia coli K-12 are presented.
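In engineering terms, the "mRNA as noisy codeword, ribosome as decoder" analogy reduces to minimum-distance decoding; a toy sketch (the paper's actual decoding models are built from ribosome/mRNA binding chemistry, not this generic decoder):

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(received, codebook):
    """Return the valid codeword closest to the received sequence, as an
    error-control decoder (or, by analogy, a ribosome scanning for a
    valid initiation signal) would."""
    return min(codebook, key=lambda cw: hamming_distance(received, cw))

# Toy 'codebook' of valid motifs over the RNA alphabet (illustrative only).
codebook = ["AGGAGG", "AAGGAG", "GGAGGU"]   # Shine-Dalgarno-like sequences
received = "AGGAGU"                          # one mutation away from AGGAGG
print(min_distance_decode(received, codebook))   # AGGAGG
```

The error-introducing environment corresponds to the channel; decoding succeeds as long as the number of mutations stays within the code's correction capability.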

  5. CAFE: A New Relativistic MHD Code

    NASA Astrophysics Data System (ADS)

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.

    2015-06-01

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.
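The HLLE flux formula named above can be illustrated on a scalar conservation law; a minimal sketch for Burgers' equation (CAFE applies the same two-wave flux to the full RMHD system, with high-order reconstructed left/right states rather than the first-order cell averages used here):

```python
def hlle_flux(uL, uR):
    """HLLE numerical flux for Burgers' equation, f(u) = u**2 / 2.
    For a scalar law the signal-speed bounds are f'(u) = u at the two
    states, clipped so that sL <= 0 <= sR."""
    f = lambda u: 0.5 * u * u
    sL = min(uL, uR, 0.0)
    sR = max(uL, uR, 0.0)
    if sR == sL:
        return f(uL)
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

def step(u, dx, dt):
    """One first-order finite-volume update with outflow (copy) boundaries."""
    ghost = [u[0]] + u + [u[-1]]
    flux = [hlle_flux(ghost[i], ghost[i + 1]) for i in range(len(u) + 1)]
    return [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(len(u))]

# Riemann problem: a right-moving shock from uL = 1 to uR = 0 (speed 1/2).
u = [1.0] * 20 + [0.0] * 20
dx, dt = 1.0 / 40, 0.5 / 40       # CFL = max|u| * dt/dx = 0.5
for _ in range(10):
    u = step(u, dx, dt)
print(u[0], u[-1], sum(u) * dx)   # end states preserved; mass conserved
```

Because the update is in conservation form, the total integral of u changes only through the boundary fluxes, which is exactly the property a shock-capturing code relies on.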

  6. CAFE: A NEW RELATIVISTIC MHD CODE

    SciTech Connect

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S. E-mail: aosorio@astro.unam.mx

    2015-06-22

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin–Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin–Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.

  7. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software is an essential tool for designing and commissioning laser interferometers. This article aims to introduce OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, simulating flat-beam cavities and three-mirror ring cavities. An example is also provided showing how to run OSCAR on the GPU of a modern graphics card instead of the CPU, making the simulation up to 20 times faster.
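The steady-state cavity field that an FFT code like OSCAR finds by iterating a propagator can be illustrated in the plane-wave limit, where the round-trip accumulation reduces to a geometric series; a minimal sketch (not OSCAR's actual API, and with the 2D FFT propagation collapsed to a scalar phase):

```python
import cmath

def cavity_steady_state(r1, r2, phase, n_round_trips):
    """Accumulate the circulating field of a two-mirror cavity by repeated
    round trips, as an FFT cavity code does with full 2D fields.
    r1, r2: amplitude reflectivities; phase: round-trip phase in radians."""
    t1 = (1 - r1 ** 2) ** 0.5          # lossless input mirror transmission
    e_circ = 0j
    for _ in range(n_round_trips):
        # new circulating field = injected field + reflected round trip
        e_circ = t1 + r1 * r2 * cmath.exp(1j * phase) * e_circ
    return e_circ

# On resonance (phase = 0) the series converges to t1 / (1 - r1*r2).
r1, r2 = 0.99, 0.995
e = cavity_steady_state(r1, r2, 0.0, 2000)
analytic = (1 - r1 ** 2) ** 0.5 / (1 - r1 * r2)
print(abs(e), analytic)
```

With realistic mirror maps, each round trip becomes an FFT propagation plus a mirror reflection operator, but the fixed-point iteration is the same.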

  8. TAIR: A transonic airfoil analysis computer code

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.

    1981-01-01

    The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.

  9. A turbulence module for the NPARC code

    NASA Technical Reports Server (NTRS)

    Zhu, J.; Shih, T.-H.

    1995-01-01

    A turbulence module is developed for the 2D version of the NPARC code, which is currently restricted to planar or axisymmetric flows without swirl. Four turbulence models have been built into the module: the Baldwin-Lomax, Chien, Shih-Lumley, and CMOTT models. The first is a mixing-length eddy-viscosity model which is mainly used for initialization of computational fields; the last three are low-Reynolds-number two-equation models. Unlike Chien's model, neither the Shih-Lumley nor the CMOTT model involves the dimensionless wall distance y(sup +), an advantage for separated flow calculations. Contrary to NPARC and most other compressible codes, the non-delta form of the transport equations is used, which leads to a simpler linearization and is more effective than the delta form in ensuring the positiveness of the turbulent kinetic energy and its dissipation rate. To reduce numerical diffusion while maintaining necessary stability, a second-order accurate and bounded scheme is used for the convective terms of the turbulent transport equations. This scheme is implemented in a deferred-correction manner so that the main coefficients of the resulting difference equations are always positive, making the numerical solution process unconditionally stable. The system of equations is solved via a decoupled method using the alternating-direction tridiagonal matrix algorithm (TDMA) of Thomas. The module can be easily linked to the NPARC code for turbulent flow calculations.
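The tridiagonal matrix algorithm (TDMA, or Thomas algorithm) named above as the line solver is a single forward-elimination/back-substitution pass; a minimal sketch on a model problem:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal,
    b the main diagonal, and c the super-diagonal (a[0], c[-1] unused).
    This O(n) forward-elimination/back-substitution is the kind of line
    solve used in alternating-direction schemes."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check on -u'' = 2 with u(0) = u(1) = 0, whose exact solution is x(1 - x).
n, h = 9, 0.1
u = thomas([-1.0] * n, [2.0] * n, [-1.0] * n, [2.0 * h * h] * n)
print(u[4])   # value at x = 0.5; the exact answer is 0.25
```

Because the quadratic exact solution is reproduced exactly by the central difference, the solver's output can be checked to machine precision.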

  10. Xenomicrobiology: a roadmap for genetic code engineering.

    PubMed

    Acevedo-Rocha, Carlos G; Budisa, Nediljko

    2016-09-01

    Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes in the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes with non-canonical nucleic acids (XNAs) and of proteins/proteomes with non-canonical amino acids (ncAAs). Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry, and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code, with a focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also allow us to study and understand the relation between code evolution and horizontal gene transfer. PMID:27489097

  11. Xenomicrobiology: a roadmap for genetic code engineering.

    PubMed

    Acevedo-Rocha, Carlos G; Budisa, Nediljko

    2016-09-01

    Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes in the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes with non-canonical nucleic acids (XNAs) and of proteins/proteomes with non-canonical amino acids (ncAAs). Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry, and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code, with a focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also allow us to study and understand the relation between code evolution and horizontal gene transfer.

  12. A novel orientation code for face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2011-06-01

    A novel orientation code is proposed for face recognition applications in this paper. Gabor wavelet transform is a common tool for orientation analysis in a 2D image; whereas Hamming distance is an efficient distance measurement for multiple classifications such as face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected, and then encoded in binary format to favor the Hamming distance calculation. Multiple-band orientation codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB has the dimensionality of 8 bits per pixel and its performance will be compared to that of FPW (face pattern word, 32 bits per pixel). The dimensionality of FPB can be further reduced down to 4 bits per pixel, called face pattern nibble (FPN). Experimental results with visible and thermal face databases show that the proposed orientation code for face recognition is very promising in contrast with classical methods such as PCA.
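The bit-packing idea behind the face pattern byte can be sketched directly: at each of a few bands, take the index of the strongest orientation response, encode it in binary, and compare patterns with the Hamming distance (the paper's Gabor filtering and order statistics are not reproduced here; the response values below are made up):

```python
def pattern_byte(responses_per_band):
    """Pack, for each of 4 frequency bands, the 2-bit index of the strongest
    of 4 orientation responses into one byte (4 bands x 2 bits = 8 bits)."""
    byte = 0
    for band, responses in enumerate(responses_per_band):
        strongest = max(range(4), key=lambda k: responses[k])
        byte |= strongest << (2 * band)
    return byte

def hamming(a, b):
    """Hamming distance between two pattern bytes (number of differing bits)."""
    return bin(a ^ b).count("1")

# Two hypothetical pixels: their bands agree except for the last one.
face_a = pattern_byte([[9, 1, 2, 0], [0, 8, 1, 1], [1, 0, 7, 2], [0, 1, 2, 9]])
face_b = pattern_byte([[9, 1, 2, 0], [0, 8, 1, 1], [1, 0, 7, 2], [9, 1, 2, 0]])
print(face_a, face_b, hamming(face_a, face_b))
```

Identification then amounts to summing such per-pixel Hamming distances over the whole face and picking the gallery image with the smallest total, which is why the compact 8-bit representation matters.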

  13. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q≡1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [( q 2+1, q 2-4 i+3,1;2,2 i+2)] q , where q≥5 is an odd prime power and 2≤ i≤( q-1)/2; (ii) , where q is an odd prime power with the form q=10 m+3 or 10 m+7 ( m≥2), and 2≤ i≤2 m-1.

  14. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  15. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
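The concatenation principle described in the two entries above can be sketched with simple stand-ins: a repetition inner code over bytes and a single-parity-check outer code (the paper's unit-memory convolutional and Reed-Solomon codes are far stronger, but the layering is the same):

```python
def inner_encode(byte):
    """Inner code stand-in: transmit three copies of each byte."""
    return [byte, byte, byte]

def inner_decode(triple):
    """Majority vote per bit position across the three received copies."""
    out = 0
    for bit in range(8):
        votes = sum((b >> bit) & 1 for b in triple)
        out |= (votes >= 2) << bit
    return out

def outer_encode(data):
    """Outer code stand-in: append an XOR parity byte."""
    parity = 0
    for b in data:
        parity ^= b
    return data + [parity]

def outer_check(codeword):
    """Detect residual errors the inner code failed to correct."""
    parity = 0
    for b in codeword:
        parity ^= b
    return parity == 0

data = [0x41, 0x7F, 0x00]
channel = [inner_encode(b) for b in outer_encode(data)]
channel[1][0] ^= 0xFF                    # corrupt one of the three copies
received = [inner_decode(t) for t in channel]
print(received[:-1], outer_check(received))
```

The inner decoder cleans up channel errors byte by byte; the byte-oriented outer code then handles (here, merely detects) whatever leaks through, which is exactly why matching the inner decoder's output unit to the outer code's symbol size pays off.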

  16. A Readout Mechanism for Latency Codes

    PubMed Central

    Zohar, Oran; Shamir, Maoz

    2016-01-01

    Response latency has been suggested as a possible source of information in the central nervous system when fast decisions are required. The accuracy of latency codes was studied in the past using a simplified readout algorithm termed the temporal-winner-take-all (tWTA). The tWTA is a competitive readout algorithm in which populations of neurons with a similar decision preference compete, and the algorithm selects according to the preference of the population that reaches the decision threshold first. It has been shown that this algorithm can account for accurate decisions among a small number of alternatives during short, biologically relevant time periods. However, one of the major points of criticism of latency codes has been that it is unclear how such a readout could be implemented by the central nervous system. Here we show that the solution to this long-standing puzzle may be rather simple. We suggest a mechanism that is based on a reciprocal inhibition architecture, similar to that of the conventional winner-take-all, and show that under a wide range of parameters this mechanism is sufficient to implement the tWTA algorithm. This is done by first analyzing a rate toy model and demonstrating its ability to discriminate short latency differences between its inputs. We then study the sensitivity of this mechanism to fine-tuning of its initial conditions, and show that it is robust to a wide range of noise levels in the initial conditions. These results are then generalized to a Hodgkin-Huxley type of neuron model, using numerical simulations. Latency codes have been criticized for requiring a reliable stimulus-onset detection mechanism as a reference for measuring latency. Here we show that this frequent assumption does not hold, and that an additional onset estimator is not needed to trigger this simple tWTA mechanism. PMID:27812332
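The reciprocal-inhibition race described above can be sketched with two leaky rate units: the unit whose input arrives first crosses threshold and suppresses the other (a rate-model toy in the spirit of the paper's analysis; all parameter values below are illustrative, not taken from the paper):

```python
def twta(latency_a, latency_b, threshold=1.0, dt=0.01, t_max=5.0,
         gain=2.0, inhibition=4.0, leak=0.5):
    """Temporal winner-take-all via reciprocal inhibition: two rate units,
    each driven by an input that switches on at its latency and inhibited
    by the other unit. The first to reach threshold wins the race."""
    ra = rb = 0.0
    t = 0.0
    while t < t_max:
        ia = gain if t >= latency_a else 0.0
        ib = gain if t >= latency_b else 0.0
        da = -leak * ra + ia - inhibition * rb
        db = -leak * rb + ib - inhibition * ra
        ra = max(0.0, ra + dt * da)   # rates clamped at zero
        rb = max(0.0, rb + dt * db)
        if ra >= threshold:
            return "A"
        if rb >= threshold:
            return "B"
        t += dt
    return None

print(twta(0.10, 0.25), twta(0.25, 0.10))   # earlier input wins: A B
```

Note that no explicit stimulus-onset detector appears anywhere: the earlier input simply gives its population a head start, and the inhibition converts that head start into a categorical decision.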

  17. Interface requirements for coupling a containment code to reactor system thermal hydraulic codes

    SciTech Connect

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Such transients and accidents as a loss-of-coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
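The time-step-level coupling the author describes, as opposed to the one-way iterative approach, can be sketched as two solvers exchanging data every step; here a toy primary-system model feeds a toy containment volume, whose rising pressure feeds back on the break flow each step (illustrative only, with arbitrary units and coefficients, not RELAP/CONTAIN data structures):

```python
def coupled_blowdown(p_primary, p_containment, steps, dt=1.0,
                     k_flow=0.002, v_ratio=5.0):
    """March two toy models together, exchanging data every time step:
    break flow is driven by the pressure difference, so containment
    pressurization feeds back on the primary-system depressurization."""
    for _ in range(steps):
        flow = k_flow * max(p_primary - p_containment, 0.0)  # per-step exchange
        p_primary -= flow * dt                # primary depressurizes
        p_containment += flow * dt / v_ratio  # larger containment volume
    return p_primary, p_containment

p_p, p_c = coupled_blowdown(p_primary=150.0, p_containment=1.0, steps=2000)
print(round(p_p, 2), round(p_c, 2))   # the two pressures approach equilibrium
```

A fixed-back-pressure analysis would instead hold p_containment at its initial value, over-predicting the late-time break flow; the coupled march captures the feedback automatically.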

  18. Description of a quantum convolutional code.

    PubMed

    Ollivier, Harold; Tillich, Jean-Pierre

    2003-10-24

    We describe a quantum error correction scheme aimed at protecting a flow of quantum information over long distance communication. It is largely inspired by the theory of classical convolutional codes which are used in similar circumstances in classical communication. The particular example shown here uses the stabilizer formalism. We provide an explicit encoding circuit and its associated error estimation algorithm. The latter gives the most likely error over any memoryless quantum channel, with a complexity growing only linearly with the number of encoded qubits.

  19. Visual mismatch negativity: a predictive coding view

    PubMed Central

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control “refractoriness” effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  20. A Code of Ethics for Referees?

    NASA Astrophysics Data System (ADS)

    Sturrock, Peter A.

    2004-04-01

    I have read with interest the many letters commenting on the pros and cons of anonymity for referees. While I sympathize with writers who have suffered from referees who are incompetent or uncivil, I also sympathize with those who argue that one would simply exchange one set of problems for another if journals were to require that all referees waive anonymity. Perhaps there is a more direct way to address the issue. It may help if guidelines for referees were to include a code of ethics.

  1. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

    In this paper, we will use the construction technique proposed in to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and the fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in the literature.

  2. CHEETAH: A fast thermochemical code for detonation

    SciTech Connect

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, whereby CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL; we will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  3. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel A. Lazerson, S. Sakakibara and Y. Suzuki

    2013-03-12

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device (LHD) in which the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the LHD.
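A synthetic magnetic diagnostic of this kind evaluates the field that a given current distribution produces at a sensor location; a minimal Biot-Savart sketch for a circular coil, checked against the textbook on-axis result (DIAGNO's virtual-casing treatment of plasma currents is well beyond this illustration):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (T*m/A)

def b_field_loop_center(current, radius, n_segments=1000):
    """Biot-Savart sum over short segments of a circular current loop,
    evaluated at the loop center -- a 'synthetic' point-field measurement."""
    bz = 0.0
    for k in range(n_segments):
        th0 = 2.0 * math.pi * k / n_segments
        th1 = 2.0 * math.pi * (k + 1) / n_segments
        thm = 0.5 * (th0 + th1)
        xm, ym = radius * math.cos(thm), radius * math.sin(thm)  # segment midpoint
        dlx = radius * (math.cos(th1) - math.cos(th0))           # segment vector dl
        dly = radius * (math.sin(th1) - math.sin(th0))
        rx, ry = -xm, -ym            # from the source segment to the field point
        r3 = (rx * rx + ry * ry) ** 1.5
        cross_z = dlx * ry - dly * rx                            # (dl x r)_z
        bz += MU0 * current / (4.0 * math.pi) * cross_z / r3
    return bz

current, radius = 1000.0, 0.5
numeric = b_field_loop_center(current, radius)
analytic = MU0 * current / (2.0 * radius)   # textbook field at the loop center
print(numeric, analytic)
```

A full diagnostics code evaluates many such sensors (flux loops, Rogowski coils, pickup coils) against volumetric plasma currents, with the virtual casing principle replacing the volume integral by a surface integral.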

  4. A Magnetic Diagnostic Code for 3D Fusion Equilibria

    SciTech Connect

    Samuel Aaron Lazerson

    2012-07-27

    A synthetic magnetic diagnostics code for fusion equilibria is presented. This code calculates the response of various magnetic diagnostics to the equilibria produced by the VMEC and PIES codes. This allows for treatment of equilibria with both good nested flux surfaces and those with stochastic regions. DIAGNO v2.0 builds upon previous codes through the implementation of a virtual casing principle. The code is validated against a vacuum shot on the Large Helical Device in which the vertical field was ramped. As an exercise of the code, the diagnostic response for various equilibria is calculated on the Large Helical Device (LHD).

  5. Optix: A Monte Carlo scintillation light transport code

    NASA Astrophysics Data System (ADS)

    Safari, M. J.; Afarideh, H.; Ghal-Eh, N.; Davani, F. Abbasi

    2014-02-01

    The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs through a very simple and user-friendly input structure. A benchmarking strategy based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes has been adopted to verify various aspects of the developed code. In addition, extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code show promising agreement.
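    To illustrate what a scintillation light transport Monte Carlo computes (this is a deliberately simplified toy, not Optix itself, and all geometry and optical parameters below are hypothetical), consider photons emitted isotropically in a 1-D slab: one face is a detector, the other reflects with some probability, and the bulk attenuates light exponentially. The Monte Carlo estimate of the collection efficiency is then a simple tally.

```python
import math
import random

def collection_efficiency(length=10.0, atten=30.0, refl=0.9,
                          n_photons=20000, seed=1):
    """Toy 1-D Monte Carlo: fraction of scintillation photons reaching
    the detector face at x = length. The opposite face reflects with
    probability `refl`; `atten` is the bulk attenuation length (same
    units as `length`)."""
    rng = random.Random(seed)
    collected = 0
    for _ in range(n_photons):
        x = rng.uniform(0.0, length)   # emission point
        u = rng.uniform(-1.0, 1.0)     # direction cosine
        while True:
            if u == 0.0:
                break                  # pathological direction; drop photon
            if u > 0:
                d = (length - x) / u   # path length to the detector face
            else:
                d = -x / u             # path length to the reflective face
            if rng.random() > math.exp(-d / atten):
                break                  # absorbed in the bulk
            if u > 0:
                collected += 1
                break
            # reached the back face: reflect or lose the photon there
            if rng.random() < refl:
                x, u = 0.0, -u
            else:
                break
    return collected / n_photons
```

Raising the back-face reflectivity should raise the collected fraction, which gives a cheap sanity check on the sampling logic.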

  6. A Comparative Study on Seismic Analysis of Bangladesh National Building Code (BNBC) with Other Building Codes

    NASA Astrophysics Data System (ADS)

    Bari, Md. S.; Das, T.

    2013-09-01

    The tectonic framework of Bangladesh and adjoining areas indicates that Bangladesh lies well within an active seismic zone. The aftereffects of an earthquake are more severe in an underdeveloped, densely populated country such as Bangladesh than in developed countries. The Bangladesh National Building Code (BNBC) was first established in 1993 to provide guidelines for the design and construction of new structures subject to earthquake ground motions, in order to minimize the risk to life for all structures. A revision of BNBC 1993 is under way to bring it up to date with other international building codes. This paper aims at comparing the various provisions for seismic analysis given in the building codes of different countries. This comparison gives an idea of where Bangladesh stands with regard to safety against earthquakes. Primarily, various seismic parameters in BNBC 2010 (draft) have been studied and compared with those of BNBC 1993. Later, both the 1993 and 2010 editions of BNBC have been compared graphically with building codes of other countries, such as the National Building Code of India 2005 (NBC-India 2005) and the American Society of Civil Engineers 7-05 (ASCE 7-05). The base shear/weight ratios have been plotted against the height of the building. The investigation in this paper reveals that BNBC 1993 has the least base shear among all the codes. Factored base shear values in BNBC 2010 are found to be significantly higher than those in BNBC 1993 for low-rise buildings (≤20 m). Despite the revision, BNBC 2010 (draft) still suggests lower base shear values than the Indian and American codes. The increase in the factor of safety against earthquakes imposed by the proposed BNBC 2010 code, through its higher base shear values, is nevertheless appreciable.

  7. Containment Fire Simulation by a CFD Code

    SciTech Connect

    Heitsch, Matthias

    2002-07-01

    Within the framework of an international collaborative project to evaluate fire models, a code benchmark was initiated to better quantify the strengths and weaknesses of the codes involved. CFX has been applied to simulate selected cases of both parts of the benchmark. These simulations are presented and discussed in this paper. In the first part of the benchmark, a pool fire represented simply by a heat release table is considered; consequently, the physical fire model within CFX is simple. Radiative heat exchange and turbulent mixing are involved. Two cases, with and without venting of the fire room, are compared. The second part of the benchmark requires a more detailed fire model in order to inspect the local availability of oxygen and to control the fire intensity. Under unvented conditions oxygen starvation is encountered and the fire oscillates. Mechanical ventilation changes this behavior and provides enough oxygen over the whole simulation time. The predefined damage criteria, which characterize whether a target cable in the fire room would be damaged, are not met. However, the predicted surface temperatures are well above the assumed threshold temperatures. A continuation of the work presented is foreseen and will address more complex physical modeling of relevant fire scenarios. (author)

  8. Exploring a Code's Material Properties Capability

    NASA Astrophysics Data System (ADS)

    Kaul, Ann

    2011-06-01

    LANL is moving its simulation workload to the laboratory's 2- and 3-D ASC hydrodynamic codes. Aggressive validation of these material simulation capabilities against experimental data is underway. Choosing appropriate material properties models and parameter values for a simulation is an area of particular concern. To address this issue, each material and experiment combination should be systematically examined through a set of code simulations. In addition to comparing competing materials models, the effect of simulation choices such as mesh size and ALE schemes for mesh untangling needs to be explored. Thoroughly understanding how such choices affect the calculated results of single physics simulations provides a user with a well-informed basis from which to ascertain how accurately a more complicated simulation portrays physical reality. Results for Lagrangian/ALE simulations of some experiments which are typically used for validation of strength and damage models will be presented. These material processes are the result of significant localization of strain and stress, which can be difficult to capture adequately on a finite-size mesh. Modeled strength experiments may include the lower strain rate (~10^4 s^-1) gas gun driven Taylor impacts, the higher strain rate (~10^5 - 10^6 s^-1) HE products driven perturbed plates, and the high shear tophats. Modeled damage experiments may include gas-gun driven flyer plates and electro-magnetically-driven cylindrical configurations.

  9. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  10. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310


  12. Python interface generator for Fortran based codes (a code development aid)

    SciTech Connect

    Grote, D. P.

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.
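    The split described here (compute-intensive kernels in a compiled language, control logic in Python) can be illustrated without Forthon's actual generated wrappers, which the abstract does not show. As a stand-in sketch, Python's ctypes can call a routine from an already-compiled library; below, the C math library's sqrt plays the role of a compiled Fortran subroutine (the library name resolution assumes a typical Linux/glibc system).

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# Load the C math library as a stand-in for a compiled numerical module.
# Forthon generates this kind of glue automatically for Fortran code.
libm = CDLL(find_library("m"))

# Declare the foreign signature: double sqrt(double)
libm.sqrt.argtypes = [c_double]
libm.sqrt.restype = c_double

def rms(values):
    """High-level Python control logic calling the compiled routine."""
    return libm.sqrt(sum(v * v for v in values) / len(values))
```

The point of a generator like Forthon is precisely that the argtypes/restype bookkeeping above, and access to module-level Fortran data, are produced mechanically rather than written by hand.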

  13. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2011-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
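    The encoder chain described above (outer coder, interleaver, recursive inner coder, symbol mapper) can be sketched with toy components. This is an illustrative pipeline only: the component codes below (a running-parity outer code, a pseudo-random interleaver, an accumulator as the recursive rate-1 inner code, and Gray-labeled QPSK) are stand-ins, not the patented SCTCM constructions.

```python
import random

def outer_encode(bits):
    """Toy rate-1/2 outer code: emit each bit plus a running parity bit."""
    out, parity = [], 0
    for b in bits:
        parity ^= b
        out += [b, parity]
    return out

def interleave(bits, seed=7):
    """Fixed pseudo-random permutation, shared by encoder and decoder."""
    idx = list(range(len(bits)))
    random.Random(seed).shuffle(idx)
    return [bits[i] for i in idx]

def inner_encode(bits):
    """Recursive rate-1 inner code: an accumulator (running XOR)."""
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def map_qpsk(bits):
    """Map bit pairs to QPSK constellation points (Gray labeling)."""
    table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
             (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def sctcm_encode(bits):
    """Outer code -> interleaver -> recursive inner code -> mapper."""
    return map_qpsk(inner_encode(interleave(outer_encode(bits))))
```

The recursion in the inner stage is what makes iterative (turbo) decoding effective; the real design additionally chooses the inner code and mapping jointly to maximize the effective free Euclidean distance, which this toy makes no attempt to do.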

  14. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  15. A new three-dimensional general-relativistic hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Baiotti, L.; Hawke, I.; Montero, P. J.; Rezzolla, L.

    We present a new three-dimensional general relativistic hydrodynamics code, the Whisky code. This code incorporates the expertise developed over the past years in the numerical solution of Einstein equations and of the hydrodynamics equations in a curved spacetime, and is the result of a collaboration of several European Institutes. We here discuss the ability of the code to carry out long-term accurate evolutions of the linear and nonlinear dynamics of isolated relativistic stars.

  16. From Local Similarities to Global Coding: A Framework for Coding Applications.

    PubMed

    Shaban, Amirreza; Rabiee, Hamid R; Najibi, Mahyar; Yousefi, Safoora

    2015-12-01

    Feature coding has received great attention in recent years as a building block of many image processing algorithms. In particular, the importance of the locality assumption in coding approaches has been studied in many previous works. We review this assumption and claim that using the similarity of data points to a more global set of anchor points does not necessarily weaken the coding method, as long as the underlying structure of the anchor points is considered. We propose to capture the underlying structure by assuming a random walker over the anchor points. We also show that our method is a fast approximation to the diffusion map kernel. Experiments on various data sets show that with a knowledge of the underlying structure of anchor points, different state-of-the-art coding algorithms may boost their performance in different learning tasks by utilizing the proposed method. PMID:26259084
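    The idea of coding a point against a global anchor set while respecting the anchors' structure can be sketched as follows. This is an illustration of the general random-walk/diffusion idea only, with Gaussian similarities and 1-D points as hypothetical choices; the authors' exact formulation and kernel may differ.

```python
import math

def gaussian(u, v, sigma):
    return math.exp(-((u - v) ** 2) / (2.0 * sigma ** 2))

def anchor_code(x, anchors, sigma=1.0, steps=1):
    """Code a data point x over a set of anchor points: start from its
    normalized anchor similarities, then diffuse that distribution
    through a row-stochastic random-walk matrix built on the anchors
    themselves, so the anchors' underlying structure shapes the code."""
    # initial code: normalized similarities of x to each anchor
    code = [gaussian(x, a, sigma) for a in anchors]
    z = sum(code)
    code = [c / z for c in code]
    # transition matrix of a random walker over the anchor set
    P = []
    for a in anchors:
        row = [gaussian(a, b, sigma) for b in anchors]
        rz = sum(row)
        P.append([r / rz for r in row])
    # diffuse: code <- code @ P, `steps` times
    for _ in range(steps):
        code = [sum(code[i] * P[i][j] for i in range(len(anchors)))
                for j in range(len(anchors))]
    return code
```

Because each row of P sums to one, the code remains a probability distribution after diffusion, and nearby anchors share mass with one another, which is the structural smoothing the abstract alludes to.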

  17. A burst-correcting algorithm for Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.

    1990-01-01

    The Bose, Chaudhuri, and Hocquenghem (BCH) codes form a large class of powerful error-correcting cyclic codes. Among the non-binary BCH codes, the most important subclass is the Reed Solomon (RS) codes. Reed Solomon codes have the ability to correct random and burst errors. It is well known that an (n,k) RS code can correct up to (n-k)/2 random errors. When burst errors are involved, the error correcting ability of the RS code can be increased beyond (n-k)/2. It has previously been shown that RS codes can reliably correct burst errors of length greater than (n-k)/2. In this paper, a new decoding algorithm is given which can also correct a burst error of length greater than (n-k)/2.
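    The headroom that burst decoders exploit can be seen from the standard capability bounds: an (n, k) RS code corrects t = (n-k)/2 symbol errors at unknown positions, but up to n-k symbols whose positions are known (erasures). A contiguous burst is closer to the erasure case, since its location can often be inferred; this small helper (a sketch of the bounds only, not the paper's decoding algorithm) just tabulates the two limits.

```python
def rs_capabilities(n, k):
    """Capability bounds of an (n, k) Reed-Solomon code: t = (n-k)//2
    symbol errors at unknown positions, versus n-k erasures (errors at
    known positions). Burst decoders can exceed t because a burst's
    contiguity gives away position information."""
    return {"random_errors": (n - k) // 2, "erasures": n - k}
```

For the common (255, 223) code, for example, the erasure limit is twice the random-error limit, which is the gap the burst-correcting algorithm works within.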

  18. A simple double error correcting BCH codes

    NASA Astrophysics Data System (ADS)

    Sinha, V.

    1983-07-01

    With the availability of various cost-effective digital hardware components, error-correcting codes can be realized in hardware in a simpler fashion than was hitherto possible. Instead of computing error locations in BCH decoding with the Berlekamp algorithm, a syndrome-to-error-location mapping using an EPROM for a double-error-correcting BCH code is described. The processing is parallel instead of serial. Possible applications are given.
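    The EPROM approach can be mimicked in software: precompute the syndrome of every correctable error pattern once, then decode with a single table access instead of running the Berlekamp algorithm. A sketch for the (15, 7) double-error-correcting BCH code follows, using the standard textbook generator polynomial g(x) = x^8 + x^7 + x^6 + x^4 + 1; the code's minimum distance of 5 guarantees all weight-2-or-less error patterns have distinct syndromes.

```python
import itertools

# Generator polynomial of the (15,7) double-error-correcting BCH code:
# g(x) = x^8 + x^7 + x^6 + x^4 + 1 (standard textbook value).
G = 0b111010001

def polymod(a, g=G):
    """Remainder of polynomial a(x) modulo g(x) over GF(2),
    with polynomials stored as integer bit vectors."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def encode(msg):
    """Systematic encoding of a 7-bit message into a 15-bit codeword."""
    shifted = msg << 8
    return shifted | polymod(shifted)

# The 'EPROM': syndrome -> error pattern, for all 0-, 1- and 2-bit errors.
SYNDROME_TABLE = {0: 0}
for bits in itertools.chain(itertools.combinations(range(15), 1),
                            itertools.combinations(range(15), 2)):
    pattern = 0
    for b in bits:
        pattern |= 1 << b
    SYNDROME_TABLE[polymod(pattern)] = pattern

def decode(word):
    """Decode a 15-bit received word by a single table lookup."""
    err = SYNDROME_TABLE.get(polymod(word))
    if err is None:
        raise ValueError("more than two bit errors")
    return (word ^ err) >> 8
```

In hardware the dictionary becomes an 8-bit-addressed ROM, and the syndrome computation is a bank of XOR gates, which is exactly why the abstract's scheme is parallel rather than serial.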

  19. A code of ethics. The 1986 AANS presidential address.

    PubMed

    Patterson, R H

    1986-09-01

    More than 50 years after the founding of the American Association of Neurological Surgeons as the Harvey Cushing Society, the delineation of a Code of Ethics constitutes a milestone for neurosurgeons. The reasons for developing the Code, its historical basis, and the continuing need to reinterpret any code in view of continuing changes in neurosurgical practice and delivery of health care are discussed.

  20. CALIOP: a multichannel design code for gas-cooled fast reactors. Code description and user's guide

    SciTech Connect

    Thompson, W.I.

    1980-10-01

    CALIOP is a design code for fluid-cooled reactors composed of parallel fuel tubes in hexagonal or cylindrical ducts. It may be used with gaseous or liquid coolants. It has been used chiefly for design of a helium-cooled fast breeder reactor and has built-in cross section information to permit calculations of fuel loading, breeding ratio, and doubling time. Optional cross-section input allows the code to be used with moderated cores and with other fuels.

  1. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  2. Toward a Code of Conduct for Graduate Education

    ERIC Educational Resources Information Center

    Proper, Eve

    2012-01-01

    Most academic disciplines promulgate codes of ethics that serve as public statements of professional norms of their membership. These codes serve both symbolic and practical purposes, stating to both members and the larger public what a discipline's highest ethics are. This article explores what scholarly society codes of ethics could say about…

  3. A Reliability Study of BDAE-3 Discourse Coding

    ERIC Educational Resources Information Center

    Powell, Thomas W.

    2006-01-01

    The third edition of the "Boston Diagnostic Aphasia Examination" (Goodglass, Kaplan, and Barresi) introduced standardized procedures for coding discourse samples elicited using the well known Cookie Theft illustration. To evaluate the reliability of this discourse coding procedure, a transcribed sample was coded by 14 novice examiners using the…

  4. A Code for Probabilistic Safety Assessment

    1997-10-10

    An integrated fault-event tree software package, PSAPACK, was developed for level-1 PSA using personal computers. It is a menu-driven, interactive, modular system which permits different choices, depending on the user's purposes and needs. The event tree development module is capable of developing the logic accident sequences based on the user's specified relations between event tree headings. Identification of success sequences and core damage sequences is done automatically by the code, based on the success function input by the user. It links minimal cut sets (MCS) from system fault trees and performs the Boolean reduction. It can also retrieve data from the reliability data base to perform the quantification of accident sequences.
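    The Boolean reduction step mentioned above rests on the absorption law X + X·Y = X: when cut sets from several fault trees are combined, any cut set that contains another cut set is redundant and can be dropped, leaving the minimal cut sets. A minimal sketch (not PSAPACK's implementation):

```python
def minimize_cut_sets(cut_sets):
    """Boolean reduction by absorption: a cut set that is a superset of
    another cut set is redundant (X + X*Y = X) and is removed. Input is
    an iterable of collections of basic-event names; output is the list
    of minimal cut sets, each sorted, shortest first."""
    sets = [frozenset(cs) for cs in cut_sets]
    minimal = []
    for cs in sorted(set(sets), key=len):   # shortest candidates first
        if not any(kept <= cs for kept in minimal):
            minimal.append(cs)
    return [sorted(m) for m in minimal]
```

Processing candidates shortest-first means each set needs to be checked only against already-kept (shorter or equal) sets, which keeps the reduction simple and correct.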

  5. A Pilot Study of Bar Codes in a Canadian Hospital

    PubMed Central

    Brisseau, Lionel; Chiveri, Andrei; Lebel, Denis; Bussières, Jean-François

    2011-01-01

    Background: In 2004, the US Food and Drug Administration issued a new rule requiring most prescription and some over-the-counter pharmaceutical products to carry bar codes down to the level of individual doses, with the intent of reducing the number of medication errors. Despite these regulatory changes in the United States, Health Canada has not yet adopted any mandatory bar-coding of drugs. Objective: To evaluate the feasibility of using commercial bar codes for receipt and preparation of drug products and to evaluate the readability of the bar codes printed on various levels of drug packaging. Methods: This cross-sectional observational pilot study was conducted in the Pharmacy Department of a Canadian mother–child university hospital centre in July 2010. For the purposes of the study, research drugs and cytotoxic drugs in various storage areas, as well as locally compounded medications with bar codes generated in house, were excluded. For all other drug products, the presence or absence of bar codes was documented for each level of packaging, along with the trade and generic names, content (i.e., drug product), quantity of doses or level of packaging, therapeutic class (if applicable), type of bar code (1- or 2-dimensional symbology), alphanumeric value contained in the bar code, standard of reference used to generate the alphanumeric value (Universal Product Code [UPC], Global Trade Item Number [GTIN], or unknown), and readability of the bar codes by 2 scanners. Results: Only 33 (1.9%) of the 1734 products evaluated had no bar codes on any level of packaging. Of the 2875 levels of packaging evaluated, 2021 (70.3%) had at least one bar code. Of the 2384 bar codes evaluated, 2353 (98.7%) were linear (1-dimensional) and 31 (1.3%) were 2-dimensional. Well over three-quarters (2112 or 88.6%) of the evaluated bar codes were readable by at least 1 of the 2 scanners used in the study. Conclusions: On the basis of these results, bar-coding could be used for receipt
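    Both UPC and GTIN identifiers mentioned in the study end in a mod-10 check digit, which lets a scanner reject most misreads. The standard GS1 algorithm weights the body digits 3, 1, 3, 1, ... starting from the rightmost body digit, and works unchanged for UPC-A, EAN/GTIN-13, and GTIN-14:

```python
def gtin_check_digit(digits):
    """Mod-10 check digit for a GTIN/EAN/UPC body (string of digits,
    check digit excluded): weight digits 3, 1, 3, ... starting from the
    RIGHTMOST body digit, per the GS1 algorithm."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10
```

Counting weights from the right rather than the left is what makes one function serve every GTIN length.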

  6. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  7. A grouped binary time code for telemetry and space applications

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    A computer-oriented time code designed for users with various time resolution requirements is presented. It is intended as a time code for spacecraft and ground applications where direct code compatibility with automatic data processing equipment is of primary consideration. The principal features of this time code are: a byte-oriented format; selectable resolution options (from seconds to nanoseconds); and a long ambiguity period. The time code is compatible with new data handling and management concepts such as the NASA End-to-End Data System and the Telemetry Data Packetization format.
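    A byte-oriented time code with selectable resolution can be sketched as follows. The field layout here is hypothetical (the abstract does not specify sizes): a 5-byte big-endian seconds count for a long ambiguity period, plus zero or more optional fraction bytes, each of which refines the resolution by a factor of 256, so users needing only seconds simply omit the trailing bytes.

```python
import struct

def pack_time(seconds, frac_bytes=0, frac=0.0):
    """Pack a timestamp into a byte-oriented field: a 5-byte big-endian
    seconds count plus `frac_bytes` optional bytes of binary fraction.
    (Hypothetical layout -- the abstract does not specify field sizes.)"""
    body = struct.pack(">Q", seconds)[3:]   # low 5 bytes of a uint64
    f = frac
    for _ in range(frac_bytes):
        f *= 256.0
        byte = int(f)
        body += bytes([byte])
        f -= byte
    return body

def unpack_time(data):
    """Inverse of pack_time: returns (seconds, fraction-of-second)."""
    seconds = struct.unpack(">Q", b"\x00\x00\x00" + data[:5])[0]
    frac = 0.0
    for i, b in enumerate(data[5:], start=1):
        frac += b / (256.0 ** i)
    return seconds, frac
```

With three fraction bytes the resolution is 2^-24 s (about 60 ns); the byte-aligned layout is what makes such a code directly digestible by data processing equipment without bit-level unpacking.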

  8. Benchmark study between FIDAP and a cellular automata code

    NASA Astrophysics Data System (ADS)

    Akau, R. L.; Stockman, H. W.

    A fluid flow benchmark exercise was conducted to compare results between a cellular automata code and FIDAP. Cellular automata codes are free from gridding constraints, and are generally used to model slow (Reynolds number approximately 1) flows around complex solid obstacles. However, the accuracy of cellular automata codes at higher Reynolds numbers, where inertial terms are significant, is not well-documented. In order to validate the cellular automata code, two fluids problems were investigated. For both problems, flow was assumed to be laminar, two-dimensional, isothermal, incompressible and periodic. Results showed that the cellular automata code simulated the overall behavior of the flow field.

  9. Circular code motifs in transfer and 16S ribosomal RNAs: a possible translation code in genes.

    PubMed

    Michel, Christian J

    2012-04-01

    In 1996, a common trinucleotide circular code, called X, was identified in genes of eukaryotes and prokaryotes (Arquès and Michel, 1996). This circular code X is a set of 20 trinucleotides allowing the reading frames in genes to be retrieved locally, i.e. anywhere in genes and in particular without start codons. This reading frame retrieval needs a window length l of 12 nucleotides (l ≥ 12). With a window length strictly less than 12 nucleotides (l < 12), some words of X, called ambiguous words, are found in the shifted frames (the reading frame shifted by one or two nucleotides), preventing the reading frame in genes from being retrieved. Since 1996, these ambiguous words of X had never been studied. In the first part of this paper, we identify all the ambiguous words of the common trinucleotide circular code X. With a length l varying from 1 to 11 nucleotides, the type and the occurrence number (multiplicity) of ambiguous words of X are given in each shifted frame. Maximal ambiguous words of X, words which are not factors of other ambiguous words, are also determined. Two probability definitions based on these results show that the common trinucleotide circular code X retrieves the reading frame in genes with a probability of about 90% with a window length of 6 nucleotides, and a probability of 99.9% with a window length of 9 nucleotides (100% with a window length of 12 nucleotides, by definition of a circular code). In the second part of this paper, we identify X circular code motifs (shortly, X motifs) in transfer RNA and 16S ribosomal RNA: a tRNA X motif of 26 nucleotides including the anticodon stem-loop, and seven 16S rRNA X motifs of length greater than or equal to 15 nucleotides. Window lengths of reading frame retrieval with each trinucleotide of these X motifs are also determined. Thanks to the crystal structure 3I8G (Jenner et al., 2010), a 3D visualization of X motifs in the ribosome shows several spatial configurations involving mRNA X motifs, A-tRNA and E-tRNA X
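    The frame-retrieval property can be demonstrated directly: scan a window in all three frames and keep the frames whose complete trinucleotides all belong to X. The 20-trinucleotide set below is the code X as reported in the literature (its transcription here is an assumption to verify against the original paper); with a window of at least 12 nucleotides the circular code property guarantees that only the true reading frame survives.

```python
# The common trinucleotide circular code X of Arques & Michel (1996),
# as reported in the literature (transcription assumed correct here).
X = {"AAC", "AAT", "ACC", "ATC", "ATT", "CAG", "CTC", "CTG", "GAA", "GAC",
     "GAG", "GAT", "GCC", "GGC", "GGT", "GTA", "GTC", "GTT", "TAC", "TTC"}

def candidate_frames(window):
    """Return the frames (0, 1, 2) in which every complete trinucleotide
    of the window belongs to X. For windows of >= 12 nucleotides drawn
    from a sequence of X codons, the circular code property leaves a
    single candidate: the true reading frame."""
    frames = []
    for f in range(3):
        codons = [window[i:i + 3] for i in range(f, len(window) - 2, 3)]
        if codons and all(c in X for c in codons):
            frames.append(f)
    return frames
```

With shorter windows, the ambiguous words studied in the paper are exactly the cases where this scan returns more than one frame.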

  10. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  11. Parallel processing a real code: A case history

    SciTech Connect

    Mandell, D.A.; Trease, H.E.

    1988-01-01

    A three-dimensional, time-dependent Free-Lagrange hydrodynamics code has been multitasked and autotasked on a Cray X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the Cray multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The 3-D algorithm has presented a number of problems that simpler algorithms, such as 1-D hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a Cray 1, to a multitasking code are discussed, as is autotasking of a rewritten version of the code. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given. 8 refs., 13 figs.
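    The theoretical speedups cited above come from Amdahl's law: if a fraction p of the work parallelizes perfectly over n processors and the rest stays serial, the speedup is 1 / ((1 - p) + p/n). A one-line implementation, with the example numbers below chosen purely for illustration (the paper's measured fractions are not reproduced here):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when a fraction p of the work
    parallelizes perfectly across n processors and the remainder
    (1 - p) stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)
```

Even on the X-MP's four CPUs the serial fraction bites: with p = 0.95 the speedup is about 3.48 rather than 4, and no processor count can push it past 1/(1 - p) = 20.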

  12. Speeding up a Lagrangian ice microphysics code

    NASA Astrophysics Data System (ADS)

    Unterstrasser, S.; Sölch, I.

    2013-07-01

    This paper presents various techniques to speed up the Lagrangian ice microphysics code EULAG-LCM. The amount of CPU time (and also memory and storage data) depends heavily on the number of simulation ice particles (SIPs) used to represent the bulk of real ice crystals. It was found that the various microphysical processes require different numbers of SIPs to reach statistical convergence (in the sense that a further increase of the SIP number does not systematically change the physical outcome of a cirrus simulation). Whereas deposition/sublimation and sedimentation require only a moderate number of SIPs, the (non-linear) ice nucleation process is only well represented when a large number of SIPs is generated. We introduced a new stochastic nucleation implementation which realistically mimics the stochastic nature of nucleation and greatly reduces numerical sensitivities. Furthermore, several strategies (SIP merging and splitting) are presented which flexibly adjust and reduce the number of SIPs. These may well serve as an inspiration for developers of other Lagrangian particle tracking models. These efficiency measures reduce the computational costs of present cirrus studies and allow extending the temporal and spatial scales of upcoming studies.
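    The SIP merging idea can be sketched in a few lines. This is a minimal illustration, not the actual EULAG-LCM algorithm: the logarithmic mass binning and the choice to conserve total crystal number and total mass within each bin are assumptions made here for the example, as is the function name.

```python
import numpy as np

def merge_sips(multiplicity, mass, n_bins=10):
    """Merge simulation ice particles (SIPs) by mass bin (toy sketch).

    Each SIP i represents multiplicity[i] real crystals of single-crystal
    mass mass[i]. All SIPs falling in the same logarithmic mass bin are
    merged into one SIP that conserves the bin's total crystal number and
    total mass (assumes strictly positive, non-identical masses).
    """
    multiplicity = np.asarray(multiplicity, dtype=float)
    mass = np.asarray(mass, dtype=float)
    edges = np.logspace(np.log10(mass.min()), np.log10(mass.max()), n_bins + 1)
    idx = np.clip(np.digitize(mass, edges) - 1, 0, n_bins - 1)

    new_mult, new_mass = [], []
    for b in range(n_bins):
        sel = idx == b
        if not sel.any():
            continue
        n_tot = multiplicity[sel].sum()                # conserve crystal number
        m_tot = (multiplicity[sel] * mass[sel]).sum()  # conserve total mass
        new_mult.append(n_tot)
        new_mass.append(m_tot / n_tot)                 # number-weighted mean mass
    return np.array(new_mult), np.array(new_mass)
```

    Merging trades a large SIP population for a much smaller one with identical bulk number and mass, which is what makes the reduction safe for the low-order moments a cirrus simulation depends on.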

  13. A code generation framework for the ALMA common software

    NASA Astrophysics Data System (ADS)

    Troncoso, Nicolás; von Brand, Horst H.; Ibsen, Jorge; Mora, Matias; Gonzalez, Victor; Chiozzi, Gianluca; Jeram, Bogdan; Sommer, Heiko; Zamora, Gabriel; Tejeda, Alexis

    2010-07-01

    Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model Driven code generation to transform a UML Model directly into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, a lower handcrafted LOC reduces the error rate. Additional benefits achieved by model driven code generation are: software reuse, implicit application of design patterns and automatic test generation. A model driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets. The generation framework presented in this paper uses openArchitectureWare as the model to text translator. OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process and aiding the developers by keeping a clean logical separation between the two. Our first results show that code generation dramatically improves code productivity.

  14. A code inspection process for security reviews

    SciTech Connect

    Garzoglio, Gabriele; /Fermilab

    2009-05-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  15. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  16. Python interface generator for Fortran based codes (a code development aid)

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  17. A-to-I editing of coding and non-coding RNAs by ADARs.

    PubMed

    Nishikura, Kazuko

    2016-02-01

    Adenosine deaminases acting on RNA (ADARs) convert adenosine to inosine in double-stranded RNA. This A-to-I editing occurs not only in protein-coding regions of mRNAs, but also frequently in non-coding regions that contain inverted Alu repeats. Editing of coding sequences can result in the expression of functionally altered proteins that are not encoded in the genome, whereas the significance of Alu editing remains largely unknown. Certain microRNA (miRNA) precursors are also edited, leading to reduced expression or altered function of mature miRNAs. Conversely, recent studies indicate that ADAR1 forms a complex with Dicer to promote miRNA processing, revealing a new function of ADAR1 in the regulation of RNA interference. PMID:26648264

  18. A-to-I editing of coding and non-coding RNAs by ADARs

    PubMed Central

    Nishikura, Kazuko

    2016-01-01

    Adenosine deaminases acting on RNA (ADARs) convert adenosine to inosine in double-stranded RNA. This A-to-I editing occurs not only in protein-coding regions of mRNAs, but also frequently in non-coding regions that contain inverted Alu repeats. Editing of coding sequences can result in the expression of functionally altered proteins that are not encoded in the genome, whereas the significance of Alu editing remains largely unknown. Certain microRNA (miRNA) precursors are also edited, leading to reduced expression or altered function of mature miRNAs. Conversely, recent studies indicate that ADAR1 forms a complex with Dicer to promote miRNA processing, revealing a new function of ADAR1 in the regulation of RNA interference. PMID:26648264

  19. Para: a computer simulation code for plasma driven electromagnetic launchers

    SciTech Connect

    Thio, Y.-C.

    1983-03-01

    A computer code for simulation of rail-type accelerators utilizing a plasma armature has been developed and is described in detail. Some time varying properties of the plasma are taken into account in this code thus allowing the development of a dynamical model of the behavior of a plasma in a rail-type electromagnetic launcher. The code is being successfully used to predict and analyse experiments on small calibre rail-gun launchers.

  20. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  1. code_swarm: a design study in organic software visualization.

    PubMed

    Ogawa, Michael; Ma, Kwan-Liu

    2009-01-01

    In May of 2008, we published online a series of software visualization videos using a method called code_swarm. Shortly thereafter, we made the code open source and its popularity took off. This paper is a study of our code_swarm application, comprising its design, results and public response. We share our design methodology, including why we chose the organic information visualization technique, how we designed for both developers and a casual audience, and what lessons we learned from our experiment. We validate the results produced by code_swarm through a qualitative analysis and by gathering online user comments. Furthermore, we successfully released the code as open source, and the software community used it to visualize their own projects and shared their results as well. In the end, we believe code_swarm has positive implications for the future of organic information design and open source information visualization practice.

  2. RAYS: a geometrical optics code for EBT

    SciTech Connect

    Batchelor, D.B.; Goldfinger, R.C.

    1982-04-01

    The theory, structure, and operation of the code are described. Mathematical details of equilibrium subroutines for slab, bumpy torus, and tokamak plasma geometry are presented. Wave dispersion and absorption subroutines are presented for frequencies ranging from the ion cyclotron frequency to the electron cyclotron frequency. Graphics postprocessors for RAYS output data are also described.

  3. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.

  4. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab.

  5. Arithmetic coding as a non-linear dynamical system

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Vaidya, Prabhakar G.; Bhat, Kishor G.

    2009-04-01

    In order to perform source coding (data compression), we treat messages emitted by independent and identically distributed sources as imprecise measurements (symbolic sequences) of a chaotic, ergodic, Lebesgue measure preserving, non-linear dynamical system known as the Generalized Lüroth Series (GLS). GLS achieves Shannon's entropy bound and turns out to be a generalization of arithmetic coding, a popular source coding algorithm used in international compression standards such as JPEG2000 and H.264. We further generalize GLS to piecewise non-linear maps (Skewed-nGLS). We motivate the use of Skewed-nGLS as a framework for joint source coding and encryption.
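    The correspondence between GLS symbolic dynamics and arithmetic coding can be made concrete for a binary i.i.d. source. The sketch below is a minimal illustration under that assumption (it does not cover the Skewed-nGLS generalization, and the function names are made up): encoding back-iterates the inverse branches of the skewed binary map to shrink [0, 1) to the interval of initial conditions sharing the message's symbolic sequence; decoding iterates the map forward.

```python
def gls_encode(bits, p):
    """Encode a binary sequence as a point on [0, 1) (arithmetic/GLS coding).

    The skewed binary map T(x) = x/p on [0, p) and (x - p)/(1 - p) on
    [p, 1) emits symbol 0 or 1 according to the branch x visits.
    Back-iterating the inverse branches yields the interval of all initial
    conditions with the message's symbolic sequence; the midpoint of that
    interval serves as the code value.
    """
    low, high = 0.0, 1.0
    for b in bits:
        width = high - low
        if b == 0:
            high = low + p * width   # inverse image of the [0, p) branch
        else:
            low = low + p * width    # inverse image of the [p, 1) branch
    return 0.5 * (low + high)

def gls_decode(x, p, n):
    """Recover n symbols by iterating the forward GLS map from x."""
    bits = []
    for _ in range(n):
        if x < p:
            bits.append(0)
            x = x / p
        else:
            bits.append(1)
            x = (x - p) / (1 - p)
    return bits
```

    For p = 0.5 this reduces to reading off a binary expansion; for skewed p the branch widths p and 1 - p give each symbol its probability-proportional share of the code space, which is exactly the arithmetic coding recursion.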

  6. CALMAR: A New Versatile Code Library for Adjustment from Measurements

    NASA Astrophysics Data System (ADS)

    Grégoire, G.; Fausser, C.; Destouches, C.; Thiollay, N.

    2016-02-01

    CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all its linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress. These cases will be included in the code package that will be made available to the community. Future developments are discussed. The code should support the new Generalized Nuclear Data (GND) format, which has many advantages compared to ENDF.

  7. RESRAD-CHEM: A computer code for chemical risk assessment

    SciTech Connect

    Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.

    1993-10-01

    RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency's (EPA's) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.

  8. Documentation for RISKIN: A risk integration code for MACCS (MELCOR Accident Consequence Code System) output

    SciTech Connect

    Rollstin, J.A. ); Hong, Kou-John )

    1990-11-01

    This document has been prepared as a user's guide for the computer program RISKIN developed at Sandia National Laboratories. The RISKIN code generates integrated risk tables and the weighted mean risk associated with a user-selected set of consequences from up to five output files generated by the MELCOR Accident Consequence Code System (MACCS). Each MACCS output file can summarize the health and economic consequences resulting from up to 60 distinct severe accident source terms. Since the accident frequency associated with these source terms is not included as a MACCS input parameter, a postprocessor is required to derive results that incorporate accident frequency. The RISKIN code is such a postprocessor. RISKIN will search the MACCS output files for the mean and peak consequence values and the complementary cumulative distribution function (CCDF) tables for each requested consequence. Once obtained, RISKIN combines these data with accident frequency data to produce frequency-weighted results. A postprocessor provides RISKIN with an interface to the proprietary DISSPLA plot package. The RISKIN code has been written in ANSI Standard FORTRAN 77 to maximize its portability.
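    The frequency weighting described above is, at its core, a sum of frequency-consequence products over source terms. A minimal sketch of that arithmetic (the function name and interface are hypothetical; RISKIN itself parses MACCS output files and also weights CCDF tables):

```python
def weighted_mean_risk(frequencies, mean_consequences):
    """Frequency-weighted mean risk over severe accident source terms.

    Each source term i has an accident frequency f_i (e.g. per
    reactor-year) and a mean consequence c_i taken from the consequence
    code's output; its contribution to risk is f_i * c_i, and the total
    risk is the sum over all source terms.
    """
    if len(frequencies) != len(mean_consequences):
        raise ValueError("one frequency is required per source term")
    return sum(f * c for f, c in zip(frequencies, mean_consequences))
```

    For example, two source terms with frequencies 1e-5 and 1e-6 per year and mean consequences 100 and 5000 contribute 1e-3 and 5e-3 respectively, for a total weighted risk of 6e-3 per year.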

  9. Towards Realistic Implementations of a Majorana Surface Code.

    PubMed

    Landau, L A; Plugge, S; Sela, E; Altland, A; Albrecht, S M; Egger, R

    2016-02-01

    Surface codes have emerged as promising candidates for quantum information processing. Building on the previous idea to realize the physical qubits of such systems in terms of Majorana bound states supported by topological semiconductor nanowires, we show that the basic code operations, namely projective stabilizer measurements and qubit manipulations, can be implemented by conventional tunnel conductance probes and charge pumping via single-electron transistors, respectively. The simplicity of the access scheme suggests that a functional code might be in close experimental reach.

  10. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  11. Bar-Code System for a Microbiological Laboratory

    NASA Technical Reports Server (NTRS)

    Law, Jennifer; Kirschner, Larry

    2007-01-01

    A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.

  12. Ethical codes for attorneys: a brief introduction.

    PubMed

    Zarkowski, P

    1997-01-01

    Ethical standards for lawyers are contained in the Model Rules of Professional Conduct (which lays out both "shall/shall not" rules and "may" suggestions in nine broad areas) and the Model Code of Professional Responsibility (which covers essentially the same topic areas but offers more detailed commentary). Topics included in the Rules are the client-lawyer relationship, the attorney's role as an advocate and counselor, law firms and associations, public service, transactions with individuals other than clients and information about legal services including advertising, firm names, and letterhead. The American Dental Association's Principles of Ethics and Code of Professional Conduct is organized around the five ethical principles of patient autonomy, nonmaleficence, beneficence, justice, and veracity. There are substantial similarities in intent between the ethical standards of dentists and lawyers; there are also differences.

  13. CESAR: A Code for Nuclear Fuel and Waste Characterisation

    SciTech Connect

    Vidal, J.M.; Grouiller, J.P.; Launay, A.; Berthion, Y.; Marc, A.; Toubon, H.

    2006-07-01

    CESAR (Simplified Evolution Code Applied to Reprocessing) is a depletion code developed through a joint program between CEA and COGEMA. In the late 1980s, the first use of this code dealt with nuclear measurement at the laboratories of the La Hague reprocessing plant. The use of CESAR was then extended to the characterisation of all entrance materials and, via tracers, of all produced waste. The code can distinguish more than 100 heavy nuclides, 200 fission products and 100 activation products, and it can characterise both the fuel and the structural material of the fuel. CESAR can also make depletion calculations from 3 months to 1 million years of cooling time. Between 2003 and 2005, the fifth version of the code was developed. The modifications were related to the harmonisation of the code's nuclear data with the JEF2.2 nuclear data file. This paper describes the code and explains its extensive use at the La Hague reprocessing plant and in prospective studies. The second part focuses on the modifications of the latest version, and describes the application field and the qualification of the code. Many companies and the IAEA use CESAR today. CESAR offers a Graphical User Interface, which is very user-friendly. (authors)

  14. A New Detailed Term Accounting Opacity Code: TOPAZ

    SciTech Connect

    Iglesias, C A; Chen, M H; Isaacs, W; Sonnad, V; Wilson, B G

    2004-04-28

    A new opacity code, TOPAZ, which explicitly includes configuration term structure in the bound-bound transitions is being developed. The goal is to extend the current capabilities of detailed term accounting opacity codes such as OPAL that are limited to lighter elements of astrophysical interest. At present, opacity calculations of heavier elements use statistical methods that rely on the presence of myriad spectral lines for accuracy. However, statistical approaches have been shown to be inadequate for astrophysical opacity calculations. An application of the TOPAZ code will be to study the limits of statistical methods. Comparisons of TOPAZ to other opacity codes as well as experiments are presented.

  15. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided in four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
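    The mean square robustness measure and the permutation comparison can be illustrated on a toy code: two-letter codons over a two-letter alphabet, with made-up property values standing in for polar requirement. This sketches only the structure of the method; the real calculation uses all 64 codons, fixed assignments, and biosynthetic subgroups.

```python
from itertools import permutations, product

def ms_robustness(assign):
    """Mean-square property change over all single-letter codon changes.

    `assign` maps each two-letter codon over {A, B} to a property value
    (a stand-in for polar requirement). Lower values mean a point
    mutation changes the encoded property less, i.e. more robustness.
    """
    diffs = []
    for c1 in assign:
        for pos in range(2):
            for letter in "AB":
                if letter != c1[pos]:
                    c2 = c1[:pos] + letter + c1[pos + 1:]
                    diffs.append((assign[c1] - assign[c2]) ** 2)
    return sum(diffs) / len(diffs)

def optimality_rank(values):
    """Fraction of permuted codes that are at least as robust.

    `values` lists the property values assigned to codons AA, AB, BA, BB
    in that order; every permutation of the assignments is scored, so a
    rank near 0 means the given code is close to optimal.
    """
    codons = ["".join(p) for p in product("AB", repeat=2)]
    base = ms_robustness(dict(zip(codons, values)))
    scores = [ms_robustness(dict(zip(codons, perm)))
              for perm in permutations(values)]
    return sum(s <= base for s in scores) / len(scores)
```

    For the standard genetic code the permutation set is restricted to biosynthetic subgroups rather than enumerated exhaustively, but the ranking logic is the same.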

  16. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  17. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
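    The core idea, a per-bit probability estimate that depends on previously encoded bits, can be sketched with a generic context model. This is an illustration under stated assumptions, not the authors' technique: it uses a one-bit context with a Laplace (+1) estimator and reports the ideal code length that an entropy coder spending -log2(p) bits per symbol would achieve.

```python
import math

def adaptive_code_length(bits):
    """Ideal code length (in bits) under a per-bit adaptive model.

    Before each bit is coded, P(1) is estimated from the counts of bits
    already seen in the same one-bit context (the previous bit), using a
    Laplace +1 estimate. The returned total is the sum of -log2(p) over
    the sequence, the length an ideal entropy coder would produce.
    """
    counts = {0: [1, 1], 1: [1, 1]}  # context -> [count of 0s, count of 1s]
    context = 0
    total = 0.0
    for b in bits:
        c0, c1 = counts[context]
        p1 = c1 / (c0 + c1)
        p = p1 if b == 1 else 1.0 - p1
        total += -math.log2(p)       # ideal cost of coding this bit
        counts[context][b] += 1      # adapt only after coding
        context = b
    return total
```

    On a strongly patterned input such as a long alternating sequence, the model quickly learns the context statistics and the ideal code length falls far below the raw bit count.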

  18. Wind turbine design codes: A comparison of the structural response

    SciTech Connect

    Buhl, M.L. Jr.; Wright, A.D.; Pierce, K.G.

    2000-03-01

    The National Wind Technology Center (NWTC) of the National Renewable Energy Laboratory is continuing a comparison of several computer codes used in the design and analysis of wind turbines. The second part of this comparison determined how well the programs predict the structural response of wind turbines. In this paper, the authors compare the structural response for four programs: ADAMS, BLADED, FAST_AD, and YawDyn. ADAMS is a commercial, multibody-dynamics code from Mechanical Dynamics, Inc. BLADED is a commercial, performance and structural-response code from Garrad Hassan and Partners Limited. FAST_AD is a structural-response code developed by Oregon State University and the University of Utah for the NWTC. YawDyn is a structural-response code developed by the University of Utah for the NWTC. ADAMS, FAST_AD, and YawDyn use the University of Utah's AeroDyn subroutine package for calculating aerodynamic forces. Although errors were found in all the codes during this study, once they were fixed, the codes agreed surprisingly well for most of the cases and configurations that were evaluated. One unresolved discrepancy between BLADED and the AeroDyn-based codes was when there was blade and/or teeter motion in addition to a large yaw error.

  19. Coding as a Trojan Horse for Mathematics Education Reform

    ERIC Educational Resources Information Center

    Gadanidis, George

    2015-01-01

    The history of mathematics educational reform is replete with innovations taken up enthusiastically by early adopters without significant transfer to other classrooms. This paper explores the coupling of coding and mathematics education to create the possibility that coding may serve as a Trojan Horse for mathematics education reform. That is,…

  20. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base to developers. This paper contributes a unified code navigation theory in light of the optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developer's behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools. PMID:25910273

  1. A novel unified coding analytical method for Internet of Things

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Zhang, JianHong

    2013-08-01

    This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out the `displacement goods' and `physical objects' and explains the relationship between them. It details the item coding principles, establishes a one-to-one relationship between three-dimensional spatial coordinates of points and global manufacturers, can be extended indefinitely, and solves the problem of unified coding in the production and circulation phases with a novel unified coding method. It further explains how to update the item information corresponding to the coding in the sale and use stages, so that the Internet of Things can carry out real-time monitoring and intelligent management of each item.

  2. Image compression using a novel edge-based coding algorithm

    NASA Astrophysics Data System (ADS)

    Keissarian, Farhad; Daemi, Mohammad F.

    2001-08-01

    In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is the predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across neighboring blocks. Satisfactory coded images at bit rates competitive with other block-based coding techniques have been obtained.

  3. Code generation: a strategy for neural network simulators.

    PubMed

    Goodman, Dan F M

    2010-10-01

    We demonstrate a technique for the design of neural network simulation software, runtime code generation. This technique can be used to give the user complete flexibility in specifying the mathematical model for their simulation in a high level way, along with the speed of code written in a low level language such as C++. It can also be used to write code only once but target different hardware platforms, including inexpensive high performance graphics processing units (GPUs). Code generation can be naturally combined with computer algebra systems to provide further simplification and optimisation of the generated code. The technique is quite general and could be applied to any simulation package. We demonstrate it with the 'Brian' simulator (http://www.briansimulator.org).
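    The runtime code generation strategy can be sketched in plain Python (a toy example, not Brian's actual machinery; the function name and Euler scheme are choices made here): model equations arrive as strings, and a specialized step function is generated and compiled once, so the per-step cost is a single compiled call rather than a generic interpreter loop over the model description.

```python
def generate_step_function(equations, dt=0.001):
    """Generate and compile a forward-Euler step function at runtime.

    `equations` maps state variable names to Python expressions for their
    time derivatives, e.g. {"v": "-v"}. The generated source inlines the
    model so no per-step dispatch on the model description is needed.
    """
    names = list(equations)
    args = ", ".join(names)
    lines = [f"def step({args}):"]
    for name, expr in equations.items():
        lines.append(f"    d_{name} = {expr}")     # evaluate all derivatives
    for name in names:
        lines.append(f"    {name} = {name} + {dt} * d_{name}")  # Euler update
    lines.append(f"    return ({args})")
    source = "\n".join(lines)
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["step"]
```

    For instance, generate_step_function({"x": "y", "y": "-x"}, dt=0.1) returns a compiled function advancing a harmonic oscillator by one Euler step; a real simulator would additionally target C or GPU backends from the same model description.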

  4. Coded aperture imaging with a HURA coded aperture and a discrete pixel detector

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    An investigation into the gamma ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma ray astronomical telescopes in terms of the provision of high quality sky images in conjunction with an imager plane which has the capacity to reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented and the imaging quality quantified in terms of the signal to noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce, as closely as possible, the experimental conditions. The 60 degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, and rendered a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for the reduction of the coding error noise associated with the gamma ray images.

  5. Electric utility value determination for wind energy. Volume II. A user's guide. [WTP code; WEIBUL code; ROSEN code; ULMOD code; FINAM code

    SciTech Connect

    Percival, David; Harper, James

    1981-02-01

    This report describes a method for determining the value of wind energy systems to electric utilities. It is performed by a package of computer models available from SERI that can be used with most utility planning models. The final output of these models gives a financial value ($/kW) of the wind energy system under consideration in the specific utility system. This volume, the second of two volumes, is a user's guide for the computer programs available from SERI. The first volume describes the value determination methodology and gives detailed discussion on each step of the computer modeling.

  6. Electric utility value determination for wind energy. Volume I. A methodology. [WTP code; WEIBUL code; ROSEN code; ULMOD code; FINAM code

    SciTech Connect

    Percival, David; Harper, James

    1981-02-01

    This report describes a method electric utilities can use to determine the value of wind energy systems. It is performed by a package of computer models available from SERI that can be used with most utility planning models. The final output of these models gives a financial value ($/kW) of the wind energy system under consideration in the specific utility system. This report, first of two volumes, describes the value determination method and gives detailed discussion on each computer program available from SERI. The second volume is a user's guide for these computer programs.

  7. A New AMR Code for Relativistic Magnetohydrodynamics in Dynamical Spacetimes: Numerical Method and Code Validation

    NASA Astrophysics Data System (ADS)

    Liu, Yuk Tung; Etienne, Zachariah; Shapiro, Stuart

    2011-04-01

    The Illinois relativity group has written and tested a new GRMHD code, which is compatible with adaptive-mesh refinement (AMR) provided by the widely-used Cactus/Carpet infrastructure. Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the BSSN formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential. The divergenceless constraint div(B) = 0 is enforced automatically, since B is computed as the curl of the vector potential. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We will present the numerical method and code validation tests for both Minkowski and curved spacetimes. The tests include magnetized shocks, nonlinear Alfven waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.

  8. Parallel processing a three-dimensional free-lagrange code

    SciTech Connect

    Mandell, D.A.; Trease, H.E.

    1989-01-01

    A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.

  9. Implementation of a Blowing Boundary Condition in the LAURA Code

    NASA Technical Reports Server (NTRS)

    Thompson, Richard a.; Gnoffo, Peter A.

    2008-01-01

    Preliminary steps toward modeling a coupled ablation problem using a finite-volume Navier-Stokes code (LAURA) are presented in this paper. Implementation of a surface boundary condition with mass transfer (blowing) is described followed by verification and validation through comparisons with analytic results and experimental data. Application of the code to a carbon-nosetip ablation problem is demonstrated and the results are compared with previously published data. It is concluded that the code and coupled procedure are suitable to support further ablation analyses and studies.

  10. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  11. A Two-Dimensional Compressible Gas Flow Code

    1995-03-17

    F2D is a general purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equations is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow to shock tubes and flow in heated porous media.

  12. The Numerical Electromagnetics Code (NEC) - A Brief History

    SciTech Connect

    Burke, G J; Miller, E K; Poggio, A J

    2004-01-20

    The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for the numerous copies (perhaps thousands) that were acquired through other means, capitalizing on the open source code, the absence of distribution controls prior to NEC3, and the availability of versions on the Internet. In this paper we briefly review the history of the code, which is concisely displayed in Figure 1. We show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately produced NEC, and how it evolved into the present-day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code, both of which can be found in the code documents.

  13. Testing a neural coding hypothesis using surrogate data.

    PubMed

    Hirata, Yoshito; Katori, Yuichi; Shimokawa, Hidetoshi; Suzuki, Hideyuki; Blenkinsop, Timothy A; Lang, Eric J; Aihara, Kazuyuki

    2008-07-30

    Determining how a particular neuron, or population of neurons, encodes information in its spike trains is not a trivial problem, because multiple coding schemes exist and are not necessarily mutually exclusive. Coding schemes generally fall into one of two broad categories, which we refer to as rate and temporal coding. In rate coding schemes, information is encoded in the variations of the average firing rate of the spike train. In contrast, in temporal coding schemes, information is encoded in the specific timing of the individual spikes that comprise the train. Here, we describe a method for testing the presence of temporal encoding of information. Suppose that a set of original spike trains is given. First, surrogate spike trains are generated by randomizing each of the original spike trains subject to the following constraints: the local average firing rate is approximately preserved, while the overall average firing rate and the distribution of primary interspike intervals are perfectly preserved. These constraints ensure that any rate coding of information present in the original spike trains is preserved in the members of the surrogate population. The null hypothesis is rejected when additional information is found to be present in the original spike trains, implying that temporal coding is present. The method is validated using artificial data, and then demonstrated using real neuronal data.
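
    A simplified surrogate of this kind can be sketched by shuffling interspike intervals. Note that this toy version preserves the ISI distribution and overall firing rate exactly but, unlike the windowed surrogates described in the paper, keeps the local firing rate only loosely:

```python
import random

def isi_shuffle(spike_times, seed=None):
    """Build a surrogate spike train by randomly permuting the
    interspike intervals (ISIs) of the original train. The surrogate
    keeps the ISI distribution and the overall firing rate exactly;
    it is a simplified stand-in for the constrained randomization
    described in the paper."""
    rng = random.Random(seed)
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    rng.shuffle(isis)
    surrogate = [spike_times[0]]
    for isi in isis:
        surrogate.append(surrogate[-1] + isi)
    return surrogate
```

    Any statistic that differs between the original trains and a population of such surrogates reflects structure beyond the preserved rate properties, i.e. candidate temporal coding.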

  14. SEQassembly: A Practical Tools Program for Coding Sequences Splicing

    NASA Astrophysics Data System (ADS)

    Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming

    A CDS (coding sequence) is the portion of an mRNA sequence composed of a number of exon sequence segments. The construction of the CDS is important for profound genetic analysis such as genotyping. A program in the MATLAB environment is presented, which can process batches of sample sequences into code segments under the guidance of reference exon models, and splice the code segments from the same sample source into a CDS according to the exon order in a queue file. This program is useful in transcriptional polymorphism detection and gene function studies.
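
    The splicing step amounts to concatenating exon segments in queue order. A minimal sketch (the original program is written in MATLAB; the exon names and sequences below are illustrative):

```python
def splice_cds(exons, order):
    """Concatenate exon sequence segments into a CDS, following the
    exon order listed in the queue. 'exons' maps exon names to their
    extracted sequence segments."""
    return "".join(exons[name] for name in order)

# Hypothetical exon segments for one sample
exons = {"exon1": "ATGGCT", "exon2": "GGTACC", "exon3": "TAA"}
cds = splice_cds(exons, ["exon1", "exon2", "exon3"])   # "ATGGCTGGTACCTAA"
```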

  15. ALPAL: A tool to generate simulation codes from natural descriptions

    SciTech Connect

    Cook, G.O. Jr.; Painter, J.F.

    1991-01-01

    ALPAL is a tool that automatically generates code to solve nonlinear integro-differential equations, given a very high-level specification of the equations to be solved and the numerical methods to be used. ALPAL is designed to handle the sort of complicated mathematical models used in very large scientific simulation codes. Other features of ALPAL include an interactive graphical front end, the ability to symbolically compute exact Jacobians for implicit methods, and a high degree of code optimization. 14 refs., 9 figs.

  16. A Combinatorial Geometry Code System with Model Testing Routines.

    1982-10-08

    GIFT, the Geometric Information For Targets code system, is used to mathematically describe the geometry of a three-dimensional vehicle such as a tank, truck, or helicopter. The geometric data generated are merged in vulnerability computer codes with the energy-effects data of a selected munition to simulate the probabilities of malfunction or destruction of components when the vehicle is attacked by the selected munition. GIFT options include those which graphically display the vehicle, those which check the correctness of the geometry data, those which compute physical characteristics of the vehicle, and those which generate the geometry data used by vulnerability codes.

  17. ALEPH2 - A general purpose Monte Carlo depletion code

    SciTech Connect

    Stankovskiy, A.; Van Den Eynde, G.; Baeten, P.; Trakas, C.; Demy, P. M.; Villatte, L.

    2012-07-01

    The Monte Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. A previous version of the code implemented the coupling between Monte Carlo transport (any version of MCNP or MCNPX) and the 'deterministic' depletion code ORIGEN-2.2, but had important deficiencies in nuclear data treatment and limitations inherent to ORIGEN-2.2. A new version of the code, ALEPH2, has several unique features that make it stand out among other depletion codes. The most important feature is full data consistency between the steady-state Monte Carlo and time-dependent depletion calculations. The latest generation of general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII and JENDL-4) is fully implemented, including special-purpose activation, spontaneous fission, fission product yield and radioactive decay data. The built-in depletion algorithm eliminates the uncertainties associated with obtaining the time-dependent nuclide concentrations. A predictor-corrector mechanism, calculation of nuclear heating, calculation of decay heat, and decay neutron sources are available as well. The code has been validated against the results of the REBUS experimental program, and ALEPH2 has shown better agreement with measured data than other depletion codes. (authors)

  18. EMdeCODE: a novel algorithm capable of reading words of epigenetic code to predict enhancers and retroviral integration sites and to identify H3R2me1 as a distinctive mark of coding versus non-coding genes.

    PubMed

    Santoni, Federico Andrea

    2013-02-01

    The existence of extra-genetic (epigenetic) codes has been postulated since the discovery of the primary genetic code. Evident effects of histone post-translational modifications or DNA methylation on the efficiency and regulation of DNA processes support this postulation. EMdeCODE is an original algorithm that approximates the genomic distribution of a given DNA feature (e.g. promoter, enhancer, viral integration) by identifying relevant ChIP-Seq profiles of post-translational histone marks or DNA-binding proteins and combining them into a supermark. The EMdeCODE kernel is essentially a two-step procedure: (i) an expectation-maximization process calculates the mixture of epigenetic factors that maximizes the sensitivity (recall) of the association with the feature under study; (ii) the approximated density is then recursively trimmed with respect to a control dataset to increase precision by reducing the number of false positives. EMdeCODE densities significantly improve the prediction of enhancer loci and retroviral integration sites with respect to previous methods. Importantly, it can also be used to extract distinctive factors between two arbitrary conditions. Indeed, EMdeCODE identifies unexpected epigenetic profiles specific to coding versus non-coding RNA, pointing towards a new role for H3R2me1 in coding regions.

  19. Revisiting the Physico-Chemical Hypothesis of Code Origin: An Analysis Based on Code-Sequence Coevolution in a Finite Population

    NASA Astrophysics Data System (ADS)

    Bandhu, Ashutosh Vishwa; Aggarwal, Neha; Sengupta, Supratim

    2013-12-01

    The origin of the genetic code marked a major transition from a plausible RNA world to the world of DNA and proteins and is an important milestone in our understanding of the origin of life. We examine the efficacy of the physico-chemical hypothesis of code origin by carrying out simulations of code-sequence coevolution in finite populations in stages, leading first to the emergence of ten amino acid code(s) and subsequently to 14 amino acid code(s). We explore two different scenarios of primordial code evolution. In one scenario, competition occurs between populations of equilibrated code-sequence sets, while in the other, new codes compete with existing codes as they are gradually introduced into the population with a finite probability. In either case, we find that natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. The code whose structure is most consistent with the standard genetic code is often not among the codes that have a high fixation probability. However, we find that the composition of the code population affects the code fixation probability. A physico-chemically optimized code gets fixed with a significantly higher probability if it competes against a set of randomly generated codes. Our results suggest that physico-chemical optimization may not be the sole driving force in ensuring the emergence of the standard genetic code.

  20. Shot level parallelization of a seismic inversion code using PVM

    SciTech Connect

    Versteeg, R.J.; Gockenback, M.; Symes, W.W.; Kern, M.

    1994-12-31

    This paper presents experience with parallelization using PVM of DSO, a seismic inversion code developed in The Rice Inversion Project. It focuses on one aspect: trying to run efficiently on a cluster of 4 workstations. The authors use a coarse-grain parallelism in which they dynamically distribute the shots over the available machines in the cluster. The modeling and migration of their code is parallelized very effectively by this strategy; they have reached an overall performance of 104 Mflops using a configuration of one manager with 3 workers, a speedup of 2.4 versus the serial version, which according to Amdahl's law is optimal given the current design of their code. Further speedup is currently limited by the non-parallelized parts of their code: optimization, linear algebra, and I/O.

  1. A program evaluation of classroom data collection with bar codes.

    PubMed

    Saunders, M D; Saunders, J L; Saunders, R R

    1993-01-01

    A technology incorporating bar code symbols and hand-held optical scanners was evaluated for its utility for routine data collection in a special education classroom. A different bar code symbol was created for each Individualized Educational Plan objective, each type of response occurrence, and each student in the first author's classroom. These symbols were organized by activity and printed as data sheets. The teacher and paraprofessionals scanned relevant codes with scanners when the students emitted targeted behaviors. The codes, dates, and approximate times of the scans were retained in the scanner's electronic memory until they could be transferred by communication software to a computer file. The data from the computer file were organized weekly into a printed report of student performance using a program written with commercially available database software. Advantages, disadvantages, and costs of using the system are discussed. PMID:8469795

  2. Towards Realistic Implementations of a Majorana Surface Code.

    PubMed

    Landau, L A; Plugge, S; Sela, E; Altland, A; Albrecht, S M; Egger, R

    2016-02-01

    Surface codes have emerged as promising candidates for quantum information processing. Building on the previous idea to realize the physical qubits of such systems in terms of Majorana bound states supported by topological semiconductor nanowires, we show that the basic code operations, namely projective stabilizer measurements and qubit manipulations, can be implemented by conventional tunnel conductance probes and charge pumping via single-electron transistors, respectively. The simplicity of the access scheme suggests that a functional code might be in close experimental reach. PMID:26894694

  3. A three-dimensional magnetostatics computer code for insertion devices.

    PubMed

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.

  4. A benchmark for galactic cosmic ray transport codes

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.

    1987-01-01

    A nontrivial analytic benchmark solution for galactic cosmic ray transport is presented for use in transport code validation. Computational accuracy for a previously-developed cosmic ray transport code is established to within one percent by comparison with this exact benchmark. Hence, solution accuracy for the transport problem is mainly limited by inaccuracies in the input spectra, input interaction databases, and the use of a straight ahead/velocity-conserving approximation.

  5. Progress towards a world-wide code of conduct

    SciTech Connect

    Lee, J.A.N.; Berleur, J.

    1994-12-31

    In this paper the work of the International Federation for Information Processing (IFIP) Task Group on Ethics is described and the recommendations presented to the General Assembly are reviewed. While a common code of ethics or conduct has not been recommended for consideration by the member societies of IFIP, a set of guidelines for the establishment and evaluation of codes has been produced, and procedures for the assistance of code development have been established within IFIP. This paper proposes that the data collected by the Task Group and the proposed guidelines can be used as a tool for the study of codes of practice, providing a teachable, learnable educational module in courses related to the ethics of computing and computation, and looks at the next steps in bringing ethical awareness to the IT community.

  6. POPCORN: A comparison of binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that, when the assumptions are equalized, the results are similar. The main differences arise from differing physical input.

  7. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, along with an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microseconds. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
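
    The Galois field arithmetic underlying such a decoder can be sketched with log/antilog tables for GF(2^5), the symbol field of the (31,15) code. The primitive polynomial chosen here (x^5 + x^2 + 1) is an illustrative assumption; the paper does not state which polynomial the implementation used:

```python
def build_gf32_tables(prim_poly=0b100101):
    """Build antilog (exp) and log tables for GF(2^5), the symbol
    field of the (31,15) Reed-Solomon code."""
    exp = [0] * 62   # doubled so sums of two logs never index out of range
    log = [0] * 32
    x = 1
    for i in range(31):          # the 31 nonzero field elements
        exp[i] = x
        log[x] = i
        x <<= 1                  # multiply by the primitive element alpha
        if x & 0b100000:         # reduce modulo the primitive polynomial
            x ^= prim_poly
    for i in range(31, 62):
        exp[i] = exp[i - 31]     # alpha^31 = 1, so exponents wrap mod 31
    return exp, log

EXP, LOG = build_gf32_tables()

def gf_mul(a, b):
    """Multiply two GF(32) elements via the tables, the same trick a
    microprogrammed Galois arithmetic unit uses to avoid polynomial
    multiplication in hardware."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

    Syndrome evaluation, the Chien search, and Forney's method are all built from repeated applications of this table-driven multiply and XOR addition.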

  8. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.

  9. A finite element code for electric motor design

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  10. Passive stabilization in a linear MHD stability code

    SciTech Connect

    Todd, A.M.M.

    1980-03-01

    Utilizing a Galerkin procedure to calculate the vacuum contribution to the ideal MHD Lagrangian, the implementation of realistic boundary conditions in a linear stability code is described. The procedure permits calculation of the effect of arbitrary conducting structures on ideal MHD instabilities, as opposed to the prior use of an encircling shell. The passive stabilization by conducting coils of the tokamak vertical instability is calculated within the PEST code and gives excellent agreement with 2-D time-dependent simulations of PDX.

  11. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown to be much more resistant to quantum bit-flip noise than the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  12. A parallel and modular deformable cell Car-Parrinello code

    NASA Astrophysics Data System (ADS)

    Cavazzoni, Carlo; Chiarotti, Guido L.

    1999-12-01

    We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90, and makes use of some new programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependences among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up in a wide range of numbers of processors independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing for a fixed problem with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ~50 ps.

  13. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
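
    The space-filling-curve idea can be sketched with a Morton (Z-order) key. This is a generic illustration assuming uniform per-patch load, not PSC's actual implementation:

```python
def morton2(x, y, bits=16):
    """Interleave the bits of integer patch coordinates (x, y) into a
    Morton (Z-order) key; sorting patches by this key orders them
    along a space-filling curve, so patches that are close in space
    tend to land on the same rank."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits -> odd positions
    return key

def balance(patches, nranks):
    """Assign patches (list of (x, y) tuples) to ranks by slicing the
    curve-ordered list into contiguous, nearly equal chunks."""
    ordered = sorted(patches, key=lambda p: morton2(*p))
    chunk = -(-len(ordered) // nranks)          # ceiling division
    return [ordered[i * chunk:(i + 1) * chunk] for i in range(nranks)]
```

    A load balancer like PSC's weights each patch by its measured work (e.g. particle count) before slicing the curve, so chunks equalize work rather than patch count.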

  14. ADLIB—A simple database framework for beamline codes

    NASA Astrophysics Data System (ADS)

    Mottershead, C. Thomas

    1993-12-01

    There are many well developed codes available for beamline design and analysis. A significant fraction of each of these codes is devoted to processing its own unique input language for describing the problem. None of these large, complex, and powerful codes does everything. Adding a new bit of specialized physics can be a difficult task whose successful completion makes the code even larger and more complex. This paper describes an attempt to move in the opposite direction, toward a family of small, simple, single purpose physics and utility modules, linked by an open, portable, public domain database framework. These small specialized physics codes begin with the beamline parameters already loaded in the database, and accessible via the handful of subroutines that constitute ADLIB. Such codes are easier to write, and inherently organized in a manner suitable for incorporation in model based control system algorithms. Examples include programs for analyzing beamline misalignment sensitivities, for simulating and fitting beam steering data, and for translating among MARYLIE, TRANSPORT, and TRACE3D formats.
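    The framework style described above can be illustrated with a toy example. This sketch is hypothetical (ADLIB itself is a Fortran subroutine library, and all names here are invented): a small shared store of beamline elements exposes a handful of accessors, so single-purpose analysis modules start with the parameters already loaded instead of each parsing its own input language.

    ```python
    # Hypothetical illustration of the ADLIB idea (invented names, not the
    # real interface): a tiny ordered store of beamline elements behind a
    # handful of accessors that small physics modules can share.
    class BeamlineDB:
        def __init__(self):
            self._elements = []                 # elements in beamline order

        def add(self, name, kind, **params):
            self._elements.append({"name": name, "kind": kind, **params})

        def get(self, name):
            for e in self._elements:
                if e["name"] == name:
                    return e
            raise KeyError(name)

        def of_kind(self, kind):
            return [e for e in self._elements if e["kind"] == kind]

        def elements(self):
            return list(self._elements)

    db = BeamlineDB()
    db.add("Q1", "quadrupole", length=0.3, gradient=12.5)
    db.add("D1", "drift", length=1.0)
    # A small single-purpose module needs only the accessors, not a parser:
    total_length = sum(e["length"] for e in db.elements())
    ```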

  15. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  16. ICD-10 mortality coding and the NCIS: a comparative study.

    PubMed

    Daking, Leanne; Dodds, Leonie

    2007-01-01

    The collection and utilisation of mortality data are often hindered by limited access to contextual details of the circumstances surrounding fatal incidents. The National Coroners Information System (NCIS) can provide researchers with access to such information. The NCIS search capabilities have been enhanced by the inclusion of data supplied by the Australian Bureau of Statistics (ABS), specifically the ICD-10 Cause of Death code set. A comparative study was conducted to identify consistencies and differences between ABS ICD-10 codes and those that could be generated by utilising the full NCIS record. Discrepancies between the two sets of codes were detected in over 50% of cases, which highlighted the importance of access to complete and timely documentation in the assignment of accurate and detailed cause of death codes. PMID:18195402

  17. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  18. A Comprehensive Validation Approach Using The RAVEN Code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J; Rinaldi, Ivan; Giannetti, Fabio; Caruso, Gianfranco

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor the comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these shortcomings. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral-effect experiment representing natural circulation, based on activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  19. Radiation transport phenomena and modeling - part A: Codes

    SciTech Connect

    Lorence, L.J.

    1997-06-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments, including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions), and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, and manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.

  20. FLASH: A finite element computer code for variably saturated flow

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  1. HADES, A Code for Simulating a Variety of Radiographic Techniques

    SciTech Connect

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.

  2. Programming a real code in a functional language (part 1)

    SciTech Connect

    Hendrickson, C.P.

    1991-09-10

    For some, functional languages hold the promise of an ease of programming massively parallel computers that imperative languages such as Fortran and C do not offer. At LLNL, we have initiated a project to write the physics of a major production code in Sisal, a functional language developed at LLNL in collaboration with researchers throughout the world. We are investigating the expressibility of Sisal, as well as its performance on a shared-memory multiprocessor, the Y-MP. An interesting aspect of the project is that Sisal modules can call Fortran modules, and are callable by them. This eliminates rewriting the 80% of the production code that would not benefit from parallel execution. Preliminary results indicate that the restrictive nature of the language does not cause problems in expressing the algorithms we have chosen. Some interesting aspects of programming in a mixed functional-imperative environment have surfaced, but can be managed. 8 refs.

  3. A systems neurophysiology approach to voluntary event coding.

    PubMed

    Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian

    2016-07-15

    Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward, with the "Theory of Event Coding (TEC)" being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine which sub-processes, dissociable by means of cognitive-neurophysiological methods, are involved in voluntary event coding. This was combined with EEG source localization. We also introduce reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that the processes involved in voluntary event coding were predominantly stimulus categorization, feature unbinding and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). At the systems-neurophysiology level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (the N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions, likely reflecting aspects of perceptual decision making, but did not modulate the other processes. The perceptual categorization stage appears central for voluntary event-file coding. PMID:27153981

  4. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  5. Development of a fusion fuel cycle systems code

    SciTech Connect

    Brereton, S.J.

    1991-12-31

    The tritium inventory in a D-T fusion experiment, like ITER, may be the major hazard onsite. This tritium is distributed throughout various systems and components. A major thrust of safety work has been aimed at reducing these tritium inventories, or at least at minimizing the amount of tritium that could be mobilized. I have developed models for a time-dependent fuel cycle systems code, which will aid in directing designers towards safer, lower inventory designs. The code will provide a self-consistent picture of system interactions and system interdependencies, and provide a better understanding of how tritium inventories are influenced. A "systems" approach is valuable in that a wide range of parameters can be studied, and more promising regions of parameter space can be identified. Ultimately, designers can use this information to specify a machine with minimum tritium inventory, given various constraints. Here, I discuss the models that describe tritium inventory in various components as a function of system parameters, and the unique capabilities of a code that will implement them. The models are time dependent and reflect a level of detail consistent with a systems type of analysis. The models support both a stand-alone Tritium Systems Code, and a module for the SUPERCODE, a time-dependent tokamak systems code. Through both versions, we should gain a better understanding of the interactions among the various components of the fuel cycle systems.
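    The kind of time-dependent inventory model described can be sketched as a compartment system. Everything below is an invented toy (compartment names, rates, and the explicit Euler scheme are assumptions for illustration, not the code's actual models): tritium flows between fuel-cycle components while decaying everywhere.

    ```python
    # Toy time-dependent tritium inventory sketch (hypothetical compartments
    # and rates, not the systems code's models): first-order transfers between
    # components plus radioactive decay, advanced with explicit Euler steps.
    import math

    T_HALF = 12.32 * 365.25 * 86400          # tritium half-life, seconds
    LAMBDA = math.log(2) / T_HALF            # decay constant, 1/s

    def step(inv, flows, dt):
        """Advance compartment inventories (grams) by one time step dt.
        flows: list of (src, dst, fraction_per_second) transfer rates."""
        d = {k: -LAMBDA * v for k, v in inv.items()}   # decay in every compartment
        for src, dst, rate in flows:
            moved = rate * inv[src]
            d[src] -= moved
            d[dst] += moved
        return {k: inv[k] + d[k] * dt for k in inv}

    inv = {"plasma": 0.0, "fueling": 100.0, "exhaust": 0.0}
    flows = [("fueling", "plasma", 1e-3), ("plasma", "exhaust", 5e-3)]
    for _ in range(1000):                    # simulate 1000 s
        inv = step(inv, flows, dt=1.0)
    ```

    Scanning a rate or initial inventory over a range and recording the resulting component inventories is the "systems" style of parameter study the abstract describes.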

  7. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  8. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Code of Federal Regulations, Title 25 (Indians), Approval of Tribal Probate Codes, § 18.111: If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  9. Conjugate heat transfer study of a wire spacer SFR fuel assembly thanks to the thermal code SYRTHES and the CFD code Code_Saturne

    NASA Astrophysics Data System (ADS)

    Péniguel, C.; Rupp, I.; Rolfo, S.; Hermouet, D.

    2014-06-01

    The paper presents an HPC calculation of conjugate heat transfer in a fuel assembly like those found in liquid-metal-cooled fast reactors. The wire spacers, helically wound along each pin axis, generate a strong secondary flow pattern, in contrast to smooth pins. Assemblies ranging from 7 to 271 pins have been simulated, with 271 pins corresponding to the industrial case. Both the fluid domain and the solid part are resolved in detail, leading to large meshes. The fluid is handled by the CFD code Code_Saturne using 98 million cells, while the solid domain is handled by the thermal code SYRTHES on meshes of up to 240 million cells. Both codes are fully parallel and run on clusters with hundreds of processors. The simulations give access to the temperature field in nominal conditions and in degraded situations.

  10. A parallel TreeSPH code for galaxy formation

    NASA Astrophysics Data System (ADS)

    Lia, Cesario; Carraro, Giovanni

    2000-05-01

    We describe a new implementation of a parallel TreeSPH code with the aim of simulating galaxy formation and evolution. The code has been parallelized using shmem, a Cray proprietary library for handling communications between the 256 processors of the Silicon Graphics T3E massively parallel supercomputer hosted by the Cineca Supercomputing Center (Bologna, Italy). The code combines the smoothed particle hydrodynamics (SPH) method for solving the hydrodynamical equations with the popular Barnes & Hut tree code, which performs the gravity calculation with N log N scaling, and it is based on the scalar TreeSPH code developed by Carraro et al. Parallelization is achieved by distributing particles among processors according to a workload criterion. Benchmarks of the code, in terms of load balance and scalability, are analysed and critically discussed for the adiabatic collapse of an isothermal gas sphere, a test using 2×10^4 particles on 8 processors. The code load-balances at better than the 95 per cent level. As the number of processors increases, the load balance worsens slightly. The deviation from perfect scalability is almost negligible up to 32 processors. Finally, we present a simulation of the formation of an X-ray galaxy cluster in a flat cold dark matter cosmology, using 2×10^5 particles and 32 processors, and compare our results with Evrard's P3M-SPH simulations. Additionally, we have incorporated radiative cooling, star formation, feedback from SNe of types II and Ia, stellar winds and UV flux from massive stars, and an algorithm to follow the chemical enrichment of the interstellar medium. Simulations with some of these ingredients are also presented.

  11. LUDWIG: A parallel Lattice-Boltzmann code for complex fluids

    NASA Astrophysics Data System (ADS)

    Desplat, Jean-Christophe; Pagonabarraga, Ignacio; Bladon, Peter

    2001-03-01

    This paper describes Ludwig, a versatile code for the simulation of Lattice-Boltzmann (LB) models in 3D on cubic lattices. In fact, Ludwig is not a single code, but a set of codes that share certain common routines, such as I/O and communications. If Ludwig is used as intended, a variety of complex fluid models with different equilibrium free energies are simple to code, so that the user may concentrate on the physics of the problem, rather than on parallel computing issues. Thus far, Ludwig's main application has been to symmetric binary fluid mixtures. We first explain the philosophy and structure of Ludwig which is argued to be a very effective way of developing large codes for academic consortia. Next we elaborate on some parallel implementation issues such as parallel I/O, and the use of MPI to achieve full portability and good efficiency on both MPP and SMP systems. Finally, we describe how to implement generic solid boundaries, and look in detail at the particular case of a symmetric binary fluid mixture near a solid wall. We present a novel scheme for the thermodynamically consistent simulation of wetting phenomena, in the presence of static and moving solid boundaries, and check its performance.

  12. A colorful origin for the genetic code: information theory, statistical mechanics and the emergence of molecular codes.

    PubMed

    Tlusty, Tsvi

    2010-09-01

    The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. While the biochemical details of this code were unraveled long ago, its origin is still obscure. We review information-theoretic approaches to the problem of the code's origin and discuss the results of a recent work that treats the code in terms of an evolving, error-prone information channel. Our model - which utilizes the rate-distortion theory of noisy communication channels - suggests that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino-acids, for error-tolerance and for minimal cost of resources. The description of the code as an information channel allows us to mathematically identify the fitness of the code and locate its emergence at a second-order phase transition when the mapping of codons to amino-acids becomes nonrandom. The noise in the channel brings about an error-graph, in which edges connect codons that are likely to be confused. The emergence of the code is governed by the topology of the error-graph, which determines the lowest modes of the graph-Laplacian and is related to the map coloring problem. PMID:20558115

  13. A colorful origin for the genetic code: Information theory, statistical mechanics and the emergence of molecular codes

    NASA Astrophysics Data System (ADS)

    Tlusty, Tsvi

    2010-09-01

    The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. While the biochemical details of this code were unraveled long ago, its origin is still obscure. We review information-theoretic approaches to the problem of the code's origin and discuss the results of a recent work that treats the code in terms of an evolving, error-prone information channel. Our model - which utilizes the rate-distortion theory of noisy communication channels - suggests that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino-acids, for error-tolerance and for minimal cost of resources. The description of the code as an information channel allows us to mathematically identify the fitness of the code and locate its emergence at a second-order phase transition when the mapping of codons to amino-acids becomes nonrandom. The noise in the channel brings about an error-graph, in which edges connect codons that are likely to be confused. The emergence of the code is governed by the topology of the error-graph, which determines the lowest modes of the graph-Laplacian and is related to the map coloring problem.

  14. Experimental qualification of a code for optimizing gamma irradiation facilities

    NASA Astrophysics Data System (ADS)

    Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.

    Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are often difficult to meet.

  15. A DOE Computer Code Toolbox: Issues and Opportunities

    SciTech Connect

    Vincent, A.M. III

    2001-06-12

    The initial activities of a Department of Energy (DOE) Safety Analysis Software Group to establish a Safety Analysis Toolbox of computer models are discussed. The toolbox shall be a DOE Complex repository of verified and validated computer models that are configuration-controlled and made available for specific accident analysis applications. The toolbox concept was recommended by the Defense Nuclear Facilities Safety Board staff as a mechanism to partially address Software Quality Assurance issues. Toolbox candidate codes have been identified through review of a DOE Survey of Software practices and processes, and through consideration of earlier findings of the Accident Phenomenology and Consequence Evaluation program sponsored by the DOE National Nuclear Security Agency/Office of Defense Programs. Planning is described to collect these high-use codes, apply tailored SQA specific to the individual codes, and implement the software toolbox concept. While issues exist such as resource allocation and the interface among code developers, code users, and toolbox maintainers, significant benefits can be achieved through a centralized toolbox and subsequent standardized applications.

  16. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting issue is turbulence modeling: the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  17. Parallel Processing of a Groundwater Contaminant Code

    SciTech Connect

    Arnett, Ronald Chester; Greenwade, Lance Eric

    2000-05-01

    The U.S. Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is conducting a field test of experimental enhanced bioremediation of trichloroethylene (TCE) contaminated groundwater. TCE is a chlorinated organic substance that was used as a solvent in the early years of the INEEL and was in some cases disposed of to the aquifer. An effort is underway to enhance the natural bioremediation of TCE by adding a non-toxic substance that serves as a feed material for the bacteria that can biologically degrade the TCE.

  18. Multisynaptic activity in a pyramidal neuron model and neural code.

    PubMed

    Ventriglia, Francesco; Di Maio, Vito

    2006-01-01

    The highly irregular firing of mammalian cortical pyramidal neurons is one of the most striking observations of brain activity. This result greatly affects the discussion on the neural code, i.e. how the brain codes information transmitted along the different cortical stages. In fact, it seems to favor one of the two main hypotheses about this issue, named the rate code. But supporters of the contrasting hypothesis, the temporal code, consider this evidence inconclusive. We discuss here a leaky integrate-and-fire model of a hippocampal pyramidal neuron, intended to be biologically sound, to investigate the genesis of the irregular pyramidal firing and to give useful information about the coding problem. To this aim, the complete set of excitatory and inhibitory synapses impinging on such a neuron has been taken into account. The firing activity of the neuron model has been studied by computer simulation, both in basic conditions and allowing brief periods of over-stimulation in specific regions of its synaptic constellation. Our results show neuronal firing conditions similar to those observed in experimental investigations of pyramidal cortical neurons. In particular, the coefficient of variation (CV) computed from the inter-spike intervals (ISIs) in our simulations for basic conditions is close to unity, like that computed from experimental data. Our simulation also shows different firing-sequence behaviors for different frequencies of stimulation. PMID:16870323
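    A minimal version of such a model can be sketched as follows. The parameters and the diffusion approximation of the synaptic input are assumptions for illustration, not the authors' full synaptic constellation: balanced noisy input holds the membrane just below threshold, and the CV of the resulting inter-spike intervals comes out well above the near-zero value of a purely mean-driven neuron.

    ```python
    # Minimal leaky integrate-and-fire sketch (hypothetical parameters, with
    # the many synapses summarized by a mean drive mu and noise sigma):
    # noise-driven threshold crossings produce irregular spiking.
    import math, random, statistics

    def simulate_lif(t_max=10.0, dt=1e-4, tau=0.02,
                     v_rest=-70.0, v_reset=-60.0, v_th=-54.0,
                     mu=750.0, sigma=40.0, seed=1):
        """Integrate dV = ((v_rest - V)/tau + mu) dt + sigma dW; return
        spike times in seconds (V in mV, rates in mV/s)."""
        rng = random.Random(seed)
        v, t, spikes = v_rest, 0.0, []
        while t < t_max:
            drift = (v_rest - v) / tau + mu              # leak + mean input
            v += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if v >= v_th:                                # threshold -> spike
                spikes.append(t)
                v = v_reset                              # reset after spike
            t += dt
        return spikes

    def cv_of_isis(spikes):
        """Coefficient of variation of the inter-spike intervals."""
        isis = [b - a for a, b in zip(spikes, spikes[1:])]
        return statistics.stdev(isis) / statistics.mean(isis)
    ```

    With these toy values the steady-state membrane potential sits about 1 mV below threshold with ~4 mV fluctuations, so every spike is a noise-driven crossing and the ISIs are broadly distributed.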

  19. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  20. A combinatorial code for pattern formation in Drosophila oogenesis

    PubMed Central

    Yakoby, N.; Bristow, C.A.; Gong, D.; Schafer, X.; Lembong, J.; Zartman, J.J.; Halfon, M.S.; Schüpbach, T.; Shvartsman, S.Y.

    2010-01-01

    Summary Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggested that they follow a simple combinatorial code, based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of the inductive signals, provided by the highly conserved EGFR and DPP pathways. We demonstrated the validity of the code by testing it against a set of newly identified expression patterns, obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguished 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterized their joint dynamics over four stages of oogenesis. This work provides the first systematic analysis of the diversity and dynamics of two-dimensional gene expression patterns in a developing tissue. PMID:19000837
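
    The combinatorial idea described above (expression patterns as set-algebraic combinations of a few spatial building blocks) can be sketched with plain set operations. The block names, grid size, and cell coordinates below are invented for illustration and are not the paper's actual domains.

```python
# Hypothetical spatial building blocks: each is a set of follicle-cell
# positions (row, col) on a toy 4x4 grid of the epithelium.
GRID = {(r, c) for r in range(4) for c in range(4)}
anterior = {(r, c) for r, c in GRID if c < 2}   # left half of the grid
dorsal = {(r, c) for r, c in GRID if r < 2}     # top half of the grid
midline = {(r, c) for r, c in GRID if r == 0}   # topmost stripe

# Expression patterns built from the blocks with union, intersection,
# and difference, in the spirit of the proposed code:
pattern_a = dorsal & anterior               # intersection: dorsal-anterior patch
pattern_b = (dorsal | anterior) - midline   # union minus the midline stripe
```

    Under this representation, comparing two genes' patterns or tracking a pattern across stages reduces to ordinary set comparisons.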

  1. A Coach's Code of Conduct. Position Statement

    ERIC Educational Resources Information Center

    Lyman, Linda; Ewing, Marty; Martino, Nan

    2009-01-01

    Coaches exert a profound impact on our youths; therefore, society sets high expectations for them. As such, whether coaches are compensated or work solely as volunteers, they are responsible for executing coaching as a professional. If we are to continue to enhance the cultural perceptions of coaching, we must strive to develop and master the…

  2. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and mica substrate with an appropriate adsorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. The John A. Hauck Foundation and Xavier University

  3. A neural coding scheme reproducing foraging trajectories

    PubMed Central

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series. PMID:26648311

  4. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series.

  5. A neural coding scheme reproducing foraging trajectories.

    PubMed

    Gutiérrez, Esther D; Cabrera, Juan Luis

    2015-12-09

    The movement of many animals may follow Lévy patterns. The underlying generating neuronal dynamics of such a behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance based neuronal model showing WLC and through the extraction of Lévy flights inducing fractals from recordings of rat hippocampus during open field foraging. Further insights are gained analyzing mice motor cortex neurons and non motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates an until now unknown way to encode information in neuronal temporal series.
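
    The superdiffusive Lévy movement discussed in the records above can be illustrated (independently of the WLC encoding mechanism itself) by a two-dimensional random walk whose step lengths follow a heavy-tailed Pareto law, drawn by inverse-CDF sampling. All parameters below are illustrative.

```python
import math
import random

def levy_flight(n_steps=1000, alpha=1.5, l_min=1.0, seed=7):
    """2D random walk with Pareto-distributed step lengths
    (survival P(L > l) = (l / l_min)^-alpha, 0 < alpha < 2),
    giving the long jumps characteristic of Levy movement."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        l = l_min * rng.random() ** (-1.0 / alpha)   # inverse-CDF Pareto draw
        theta = rng.uniform(0.0, 2.0 * math.pi)      # isotropic direction
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        path.append((x, y))
    return path
```

    For alpha below 2 the step-length variance diverges, which is what makes the resulting trajectories superdiffusive rather than Brownian.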

  6. Incorporation of Condensation Heat Transfer in a Flow Network Code

    NASA Technical Reports Server (NTRS)

    Anthony, Miranda; Majumdar, Alok; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    In this paper we have investigated the condensation of water vapor in a short tube. A numerical model of condensation heat transfer was incorporated in a flow network code. The flow network code that we have used in this paper is the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a finite volume based flow network code. Four different condensation models were presented in the paper. Soliman's correlation has been found to be the most stable at low flow rates, which are of particular interest in this application. Another highlight of this investigation is conjugate or coupled heat transfer between solid and fluid. This work was done in support of NASA's International Space Station program.

  7. SCAMPI: A code package for cross-section processing

    SciTech Connect

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.
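
    The fine-group-to-broad-group step mentioned above rests on flux-weighted group collapse: for each broad group G, sigma_G = sum over fine groups g in G of sigma_g * phi_g, divided by the summed flux. The sketch below shows only that formula; the cross sections and weighting spectrum are made up and have nothing to do with SCAMPI's actual data.

```python
def collapse(sigma_fine, flux_fine, broad_groups):
    """Flux-weighted collapse of fine-group cross sections:
    sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) for fine groups g in G."""
    collapsed = []
    for lo, hi in broad_groups:           # [lo, hi) index range of fine groups
        phi = sum(flux_fine[lo:hi])
        num = sum(s * f for s, f in zip(sigma_fine[lo:hi], flux_fine[lo:hi]))
        collapsed.append(num / phi)
    return collapsed

# Illustrative 6-group data collapsed to 2 broad groups:
sigma = [2.0, 2.2, 2.4, 5.0, 6.0, 7.0]    # hypothetical cross sections (barns)
flux = [1.0, 1.0, 2.0, 1.0, 1.0, 2.0]     # hypothetical weighting spectrum
broad = collapse(sigma, flux, [(0, 3), (3, 6)])
```

    The whole problem dependence of a broad-group library enters through the weighting spectrum `flux`, which is why the processing chain needs a problem-specific flux solution.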

  8. A sweet code for glycoprotein folding.

    PubMed

    Caramelo, Julio J; Parodi, Armando J

    2015-11-14

    Glycoprotein synthesis is initiated in the endoplasmic reticulum (ER) lumen upon transfer of a glycan (Glc3Man9GlcNAc2) from a lipid derivative to Asn residues (N-glycosylation). N-Glycan-dependent quality control of glycoprotein folding in the ER prevents exit to Golgi of folding intermediates, irreparably misfolded glycoproteins and incompletely assembled multimeric complexes. It also enhances folding efficiency by preventing aggregation and facilitating formation of proper disulfide bonds. The control mechanism essentially involves four components, resident lectin-chaperones (calnexin and calreticulin) that recognize monoglucosylated polymannose protein-linked glycans, lectin-associated oxidoreductase acting on monoglucosylated glycoproteins (ERp57), a glucosyltransferase that creates monoglucosylated epitopes in protein-linked glycans (UGGT) and a glucosidase (GII) that removes the glucose units added by UGGT. This last enzyme is the only mechanism component sensing glycoprotein conformations as it creates monoglucosylated glycans exclusively in not properly folded glycoproteins or in not completely assembled multimeric glycoprotein complexes. Glycoproteins that fail to properly fold are eventually driven to proteasomal degradation in the cytosol following the ER-associated degradation pathway, in which the extent of N-glycan demannosylation by ER mannosidases plays a relevant role in the identification of irreparably misfolded glycoproteins.

  9. Requirements for a multifunctional code architecture

    SciTech Connect

    Tiihonen, O.; Juslin, K.

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way input data and calculational results are managed.

  10. NASTRAN as a resource in code development

    NASA Technical Reports Server (NTRS)

    Stanton, E. L.; Crain, L. M.; Neu, T. F.

    1975-01-01

    A case history is presented in which the NASTRAN system provided both guidelines and working software for use in the development of a discrete element program, PATCHES-111. To avoid duplication and to take advantage of the widespread user familiarity with NASTRAN, the PATCHES-111 system uses NASTRAN bulk data syntax, NASTRAN matrix utilities, and the NASTRAN linkage editor. Problems in developing the program are discussed along with details on the architecture of the PATCHES-111 parametric cubic modeling system. The system includes model construction procedures, checkpoint/restart strategies, and other features.

  11. A Method for Automated Program Code Testing

    ERIC Educational Resources Information Center

    Drasutis, Sigitas; Motekaityte, Vida; Noreika, Algirdas

    2010-01-01

    The Internet has recently encouraged the society to convert almost all its needs to electronic resources such as e-libraries, e-cultures, e-entertainment as well as e-learning, which has become a radical idea to increase the effectiveness of learning services in most schools, colleges and universities. E-learning can not be completely featured and…

  12. LOOPREF: A Fluid Code for the Simulation of Coronal Loops

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda; Antiochos, Spiro; Spicer, Daniel

    1998-01-01

    This report documents the code LOOPREF. LOOPREF is a semi-one dimensional finite element code that is especially well suited to simulate coronal-loop phenomena. It has a full implementation of adaptive mesh refinement (AMR), which is crucial for this type of simulation. The AMR routines are an improved version of AMR1D. LOOPREF's versatility makes it suitable to simulate a wide variety of problems. In addition to efficiently providing very high resolution in rapidly changing regions of the domain, it is equipped to treat loops of variable cross section, any non-linear form of heat conduction, shocks, gravitational effects, and radiative loss.

  13. [Space coding: a Nobel prize diary].

    PubMed

    Rondi-Reig, Laure

    2015-02-01

    The Nobel Prize in Medicine or Physiology for 2014 has been awarded to three neuroscientists: John O'Keefe, May-Britt Moser and Edvard Moser for "their discoveries of cells that constitute a positioning system in the brain". This rewards innovative ideas which led to the development of intracerebral recording techniques in freely moving animals, thus providing links between behavior and physiology. This prize highlights how neural activity sustains our ability to localize ourselves and move around in the environment. This research provides key insights into how the brain drives behavior. PMID:25744268

  14. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    SciTech Connect

    Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben

    2015-07-20

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.
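
    The "passively advected scalar" treatment mentioned above can be illustrated in one dimension with a first-order upwind finite-volume update, a generic Godunov-type scheme rather than KM2's actual solver; the grid, velocity, and pulse below are illustrative.

```python
def upwind_advect(q, u, dx, dt, n_steps):
    """First-order upwind finite-volume update for a passively advected
    scalar q carried at constant velocity u > 0 on a periodic 1D grid."""
    c = u * dt / dx                       # CFL number; must satisfy 0 < c <= 1
    assert 0.0 < c <= 1.0, "CFL condition violated"
    q = list(q)
    n = len(q)
    for _ in range(n_steps):
        # Flux difference uses the upwind (left) neighbor; periodic wraparound.
        q = [q[i] - c * (q[i] - q[(i - 1) % n]) for i in range(n)]
    return q

# A unit pulse advected four half-cell steps to the right:
pulse = [0.0] * 8
pulse[3] = 1.0
advected = upwind_advect(pulse, u=1.0, dx=1.0, dt=0.5, n_steps=4)
```

    The scheme is conservative (the total amount of scalar is preserved exactly) and monotone, at the cost of numerical diffusion that smears the pulse.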

  15. A Spectral Verification of the HELIOS-2 Lattice Physics Code

    SciTech Connect

    D. S. Crawford; B. D. Ganapol; D. W. Nigg

    2012-11-01

    Core modeling of the Advanced Test Reactor (ATR) at INL is currently undergoing a significant update through the Core Modeling Update Project. The intent of the project is to bring ATR core modeling in line with today’s standard of computational efficiency and verification and validation practices. The HELIOS-2 lattice physics code is the lead code of several reactor physics codes to be dedicated to modernizing ATR core analysis. This presentation is concerned with an independent verification of the HELIOS-2 spectral representation, including the slowing down and thermalization algorithm and its data dependency. Here, we will describe and demonstrate a recently developed simple cross section generation algorithm based entirely on analytical multigroup parameters for both the slowing down and thermal spectrum. The new capability features fine group detail to assess the flux and multiplication factor dependencies on cross section data sets, using the fundamental infinite medium as an example.

  16. Towards a 3D Space Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full 3D space shielding code is presented.

  17. A Post-Monte-Carlo Sensitivity Analysis Code

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in the order of importance and also quantifies the relative importance among the sensitive variables.
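
    A minimal version of the task described above: after a Monte Carlo run, rank inputs by their squared correlation with the output, a common first-order proxy for each variable's variance contribution. This is a generic sketch, not SATOOL's algorithm, and the model function below is made up for the demonstration.

```python
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def rank_sensitivities(inputs, output):
    """Rank input variables by squared correlation with the output."""
    scores = {name: pearson(vals, output) ** 2 for name, vals in inputs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Monte Carlo run of a made-up model whose output is dominated by x1:
rng = random.Random(0)
n = 2000
inputs = {name: [rng.gauss(0, 1) for _ in range(n)] for name in ("x1", "x2", "x3")}
output = [10 * a + b + 0.1 * c
          for a, b, c in zip(inputs["x1"], inputs["x2"], inputs["x3"])]
ranking = rank_sensitivities(inputs, output)
```

    Redefining the top-ranked variables with lower variance then shrinks the output variance the most, which is exactly the use case the abstract describes.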

  18. A fixed delay speech coder with variable length binary coding

    NASA Astrophysics Data System (ADS)

    Haoui, A.; Messerschmitt, D. G.

    The average bit rate of a vector quantizer (VQ) can be substantially decreased with variable-length (VL) binary coding with memory. To avoid the variable delay and buffer overflow problems associated with VL coding, it is proposed to use an embedded coder in conjunction with the VL coder. Simulation results and listening tests indicate that the proposed fixed delay coder produces better quality speech at 16 Kbits/sec than ADPCM with a fixed second order predictor at 24 Kbits/sec and is less speaker dependent than standard VQ coders.
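
    The bit-rate saving from variable-length coding of quantizer outputs can be sketched with a Huffman prefix code over VQ indices. This is a generic memoryless construction; the paper's VL coder with memory and its embedded coder are not reproduced here, and the index stream below is invented.

```python
import heapq
from collections import Counter

def huffman_code(index_stream):
    """Prefix code over VQ indices: frequent indices get shorter codewords."""
    freq = Counter(index_stream)
    if len(freq) == 1:                            # degenerate one-symbol stream
        return {next(iter(freq)): "0"}
    code = {s: "" for s in freq}
    heap = [(n, i, [s]) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                          # keeps heap entries comparable
    while len(heap) > 1:
        n1, _, g1 = heapq.heappop(heap)           # two least-frequent groups
        n2, _, g2 = heapq.heappop(heap)
        for s in g1:
            code[s] = "0" + code[s]               # extend codewords on merge
        for s in g2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (n1 + n2, tiebreak, g1 + g2))
        tiebreak += 1
    return code

# Skewed stream of 2-bit VQ indices: VL coding beats the fixed 2 bits/index.
stream = [0] * 50 + [1] * 25 + [2] * 15 + [3] * 10
code = huffman_code(stream)
avg_bits = sum(len(code[s]) for s in stream) / len(stream)
```

    The average rate drops below the fixed rate exactly when the index distribution is skewed; the price is codewords of unequal length, which is the source of the variable-delay problem the abstract sets out to avoid.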

  19. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B=0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a

  20. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  1. FLY: a Tree Code for Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.

    FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.

  2. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  3. ELEFANT: a user-friendly multipurpose geodynamics code

    NASA Astrophysics Data System (ADS)

    Thieulot, C.

    2014-07-01

    A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique and Lagrangian markers are used to track materials in the simulation domain, which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented, including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS, whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics, presented as a case study, illustrates the potential of the code to model large scale high resolution thermo-mechanically coupled free surface flows.

  4. Validation of a comprehensive space radiation transport code.

    PubMed

    Shinn, J L; Cucinotta, F A; Simonsen, L C; Wilson, J W; Badavi, F F; Badhwar, G D; Miller, J; Zeitlin, C; Heilbronn, L; Tripathi, R K; Clowdsley, M S; Heinbockel, J H; Xapsos, M A

    1998-12-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in design of the SAGE-III instrument, resulting in material changes to control injurious neutron production, in the study of Space Shuttle single event upsets, and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation.

  5. A Radiation Shielding Code for Spacecraft and Its Validation

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.

    2000-01-01

    The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.

  6. VIPRE-01: A thermal-hydraulic code for reactor cores:

    SciTech Connect

    Stewart, C.W.; Cuta, J.M.

    1988-03-01

    VIPRE (Versatile Internals and Component Program for Reactors; EPRI) has been developed for nuclear power utility thermal-hydraulic analysis applications. It is designed to help evaluate nuclear reactor core safety limits including minimum departure from nucleate boiling ratio (MDNBR), critical power ratio (CPR), fuel and clad temperatures, and coolant state in normal operation and assumed accident conditions. This volume discusses general and specific considerations in using VIPRE as a thermal-hydraulic analysis tool. Volume 1: Mathematical Modeling, explains the major thermal-hydraulic models and supporting mathematical correlations in detail. Volume 2: User's Manual, describes the input requirements of the codes in the VIPRE code package. Volume 3: Programmer's Manual, explains the code structure and computer interface. Experience in running VIPRE is documented in Volume 4: Applications. 25 refs., 31 figs., 7 tabs.

  7. Breaking the Genetic Code in a Letter by Max Delbruck.

    ERIC Educational Resources Information Center

    Fox, Marty

    1996-01-01

    Describes a classroom exercise that uses a letter from Max Delbruck to George Beadle to stimulate interest in the mechanics of a nonoverlapping comma-free code. Enables students to participate in the rich history of molecular biology and illustrates to them that scientists and science can be fun. (JRH)

  8. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... specified in § 50.55, except that each combined license for a boiling or pressurized water-cooled nuclear... boiling or pressurized water-cooled nuclear power facility is subject to the conditions in paragraphs...

  9. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... specified in § 50.55, except that each combined license for a boiling or pressurized water-cooled nuclear... boiling or pressurized water-cooled nuclear power facility is subject to the conditions in paragraphs...

  10. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... specified in § 50.55, except that each combined license for a boiling or pressurized water-cooled nuclear... boiling or pressurized water-cooled nuclear power facility is subject to the conditions in paragraphs...

  11. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... specified in § 50.55, except that each combined license for a boiling or pressurized water-cooled nuclear... boiling or pressurized water-cooled nuclear power facility is subject to the conditions in paragraphs...

  12. 10 CFR 50.55a - Codes and standards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... specified in § 50.55, except that each combined license for a boiling or pressurized water-cooled nuclear... boiling or pressurized water-cooled nuclear power facility is subject to the conditions in paragraphs...

  13. A Learning Environment for English Vocabulary Using Quick Response Codes

    ERIC Educational Resources Information Center

    Arikan, Yuksel Deniz; Ozen, Sevil Orhan

    2015-01-01

    This study focuses on the process of developing a learning environment that uses tablets and Quick Response (QR) codes to enhance participants' English language vocabulary knowledge. The author employed the concurrent triangulation strategy, a mixed research design. The study was conducted at a private school in Izmir, Turkey during the 2012-2013…

  14. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  16. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
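
The decorrelation stage described above can be illustrated with the simplest possible subband split, a one-level Haar filter bank. This is a generic sketch of the idea, not the authors' filter design; the quantizer and adaptive arithmetic coder are omitted.

```python
# One-level Haar analysis/synthesis: split a signal into a low-pass
# (average) and high-pass (difference) subband, then reconstruct.
# Illustrative sketch only -- not the filter bank used by the authors.

def haar_analysis(x):
    """Split an even-length signal into (lowband, highband)."""
    assert len(x) % 2 == 0
    low = [(x[2*i] + x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    high = [(x[2*i] - x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Invert haar_analysis exactly (perfect reconstruction)."""
    x = []
    for l, h in zip(low, high):
        x.extend([l + h, l - h])
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
low, high = haar_analysis(signal)
rec = haar_synthesis(low, high)
```

For a correlated (smooth) waveform, most of the energy concentrates in the low band, which is what makes the subsequent quantization and entropy-coding stages efficient.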

  17. Topological Index as a Sorting Device for Coding Chemical Structures

    ERIC Educational Resources Information Center

    Hosoya, Haruo

    1972-01-01

    Although the topological index does not uniquely correspond to the individual structure of a graph, it roughly represents the topological nature of the graph. Examples are given for using the topological index as a first sorting device for coding and retrieving structures, especially of fused polycyclic systems. (14 references) (Author/NH)
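
The index in question is Hosoya's Z index, the number of matchings of the molecular graph (including the empty matching). A minimal sketch using the standard edge-deletion recurrence Z(G) = Z(G−e) + Z(G−{u,v}):

```python
# Hosoya topological index Z: the number of matchings of a graph,
# including the empty matching.  Computed by the edge-deletion
# recurrence: delete an edge (matchings avoiding it) plus delete its
# endpoints (matchings using it).

def hosoya_z(edges):
    edges = list(edges)
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    # edges untouched by (u, v), i.e. compatible with using it in a matching
    untouched = [(a, b) for (a, b) in rest
                 if a not in (u, v) and b not in (u, v)]
    return hosoya_z(rest) + hosoya_z(untouched)

path = [(1, 2), (2, 3), (3, 4)]                           # n-butane skeleton
ring = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]   # cyclohexane ring
```

Because many non-isomorphic graphs share a Z value, Z serves as a first sorting key (as the abstract notes) rather than a unique code.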

  18. NERO - a post-maximum supernova radiation transport code

    NASA Astrophysics Data System (ADS)

    Maurer, I.; Jerkstrand, A.; Mazzali, P. A.; Taubenberger, S.; Hachinger, S.; Kromer, M.; Sim, S.; Hillebrandt, W.

    2011-12-01

    The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a non-local thermodynamic equilibrium (NLTE) treatment of radiation transport, has rarely been studied. In this paper, we present a new SN radiation transport code, NERO, which can look at those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and the intermediate nebular phase. As a test, we compare NERO to the radiation transport code of Jerkstrand, Fransson & Kozma and to the nebular code of Mazzali et al. All three codes have been developed independently and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' Type Ia supernova (SN Ia) 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post-maximum epochs.

  19. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    ERIC Educational Resources Information Center

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  20. Dependent video coding using a tree representation of pixel dependencies

    NASA Astrophysics Data System (ADS)

    Amati, Luca; Valenzise, Giuseppe; Ortega, Antonio; Tubaro, Stefano

    2011-09-01

    Motion-compensated prediction induces a chain of coding dependencies between pixels in video. In principle, an optimal selection of encoding parameters (motion vectors, quantization parameters, coding modes) should take into account the whole temporal horizon of a GOP. However, in practical coding schemes, these choices are made on a frame-by-frame basis, thus with a possible loss of performance. In this paper we describe a tree-based model for pixelwise coding dependencies: each pixel in a frame is the child of a pixel in a previous reference frame. We show that some tree structures are more favorable than others from a rate-distortion perspective, e.g., because they entail a large set of descendant pixels that are well predicted from a common ancestor. In those cases, a higher quality has to be assigned to pixels at the top of such trees. We promote the creation of these structures by adding a special discount term to the conventional Lagrangian cost adopted at the encoder. The proposed model can be implemented through a double-pass encoding procedure. Specifically, we devise heuristic cost functions to drive the selection of quantization parameters and of motion vectors, which can be readily implemented into a state-of-the-art H.264/AVC encoder. Our experiments demonstrate that coding efficiency is improved for video sequences with low motion, while there are no apparent gains for more complex motion. We argue that this is due to both the presence of complex encoder features not captured by the model, and to the complexity of the source to be encoded.
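
The intuition can be sketched on a toy dependency tree: pixels with many motion-compensated descendants get a discount on the Lagrangian cost J = D + λR, so the encoder spends more bits where many future pixels inherit the prediction. The tree layout and the per-descendant discount form below are illustrative assumptions, not the paper's exact cost function.

```python
# Toy model of pixelwise coding dependencies: parent[i] is the pixel in
# the previous frame that pixel i is predicted from (None for a root).
# Pixels with many descendants get a discount on J = D + lambda*R.
# The linear discount (gamma per descendant) is an illustrative assumption.

def count_descendants(parent):
    counts = [0] * len(parent)
    for i, p in enumerate(parent):
        while p is not None:        # credit every ancestor of pixel i
            counts[p] += 1
            p = parent[p]
    return counts

def discounted_cost(distortion, rate, lam, gamma, n_descendants):
    return distortion + lam * rate - gamma * n_descendants

# A prediction chain 0 <- 1 <- 2 <- 3 and an isolated root 4.
parent = [None, 0, 1, 2, None]
counts = count_descendants(parent)
```

A pixel at the top of a long chain (here pixel 0, with 3 descendants) gets the largest discount, so the encoder is steered toward giving it a higher quality.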

  1. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

    A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines. We achieve an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.

  2. Beyond a code of ethics: phenomenological ethics for everyday practice.

    PubMed

    Greenfield, Bruce; Jensen, Gail M

    2010-06-01

    Physical therapy, like all health-care professions, governs itself through a code of ethics that defines its obligations of professional behaviour. The code of ethics provides professions with a consistent and common moral language and principled guidelines for ethical actions. Yet, as argued in this paper, professional codes of ethics have limits when applied to ethical decision-making in the presence of ethical dilemmas. Part of the limitation of codes of ethics is that there is no particular hierarchy of principles that governs in all situations. Instead, the exigencies of clinical practice, the particularities of individual patients' illness experiences and the transformative nature of chronic illnesses and disabilities often obscure the ethical concerns and issues embedded in concrete situations. Consistent with models of expert practice, and with contemporary models of patient-centred care, we advocate and describe in this paper a type of interpretative and narrative approach to moral practice and ethical decision-making based on phenomenology. The tools of phenomenology that are well defined in research are applied and examined in a case that illustrates their use in uncovering the values and ethical concerns of a patient. Based on the deconstruction of this case using a phenomenological approach, we illustrate how such approaches to ethical understanding can assist clinicians and educators in applying principles within the context and needs of each patient.

  3. Bio—Cryptography: A Possible Coding Role for RNA Redundancy

    NASA Astrophysics Data System (ADS)

    Regoli, M.

    2009-03-01

    The RNA-Crypto System (RCS for short) is a symmetric key algorithm to cipher data. The idea for this new algorithm comes from the observation of nature, in particular of RNA behavior and some of its properties. RNA sequences have some sections called introns. Introns, derived from the term "intragenic regions," are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in the pre-mRNA are not yet clear and are still under active investigation by biologists; in our case, we use the presence of introns in the RNA-Crypto System output as a way to add chaotic non-coding information and to obscure access to the secret key used to encode the messages. In the RNA-Crypto System algorithm the introns are sections of the ciphered message with non-coding information, just as in the precursor mRNA.
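
The intron idea can be illustrated with a deliberately insecure toy: key-derived positions in the output are filled with random junk ("introns") that a receiver who knows the key simply discards. This is only a sketch of the obfuscation concept, not the RCS cipher itself, and it provides no real security.

```python
import random

# Toy illustration of the intron concept (NOT a secure cipher): the
# sender inserts random "intron" bytes at key-derived positions; the
# receiver derives the same positions from the shared key and strips
# them out.  All names and parameters here are hypothetical.

def intron_positions(key, n_message, n_introns):
    rng = random.Random(key)              # key-derived pseudorandom stream
    return sorted(rng.sample(range(n_message + n_introns), n_introns))

def insert_introns(message, key, n_introns):
    junk = random.Random(key + "junk")    # separate stream for junk bytes
    pos = set(intron_positions(key, len(message), n_introns))
    out, it = [], iter(message)
    for i in range(len(message) + n_introns):
        out.append(junk.randrange(256) if i in pos else next(it))
    return out

def strip_introns(data, key, n_introns):
    pos = set(intron_positions(key, len(data) - n_introns, n_introns))
    return [b for i, b in enumerate(data) if i not in pos]

msg = list(b"ATTACK AT DAWN")
coded = insert_introns(msg, "shared-key", 5)
recovered = strip_introns(coded, "shared-key", 5)
```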

  4. RESRAD-ECORISK: A computer code for ecological risk assessment

    SciTech Connect

    Cheng, J.J.

    1995-12-01

    RESRAD-ECORISK is a PC-based computer code developed by Argonne National Laboratory (ANL) to estimate risks from exposure of ecological receptors at sites contaminated with potentially hazardous chemicals. The code is based on and is consistent with the methodologies of RESRAD-CHEM, an ANL-developed computer code for assessments of human health risk. RESRAD-ECORISK uses environmental fate and transport models to estimate contaminant concentrations in environmental media from an initial contaminated soil source and food-web uptake models to estimate contaminant doses to ecological receptors. The dose estimates are then used to estimate a risk for the ecological receptor and to calculate preliminary soil guidelines for reducing risks to acceptable levels. Specifically, RESRAD-ECORISK calculates (1) a species-specific applied daily dose for each contaminant (using species-specific life history information and site-specific environmental media concentrations), (2) an ecological hazard quotient (EHQ) for each contaminant and species, and (3) preliminary soil cleanup criteria for each contaminant and receptor. RESRAD-ECORISK incorporates a user-friendly menu-driven interface, databases and default values for a variety of ecological and chemical parameters, and on-line help for easy operation. The code is sufficiently flexible to simulate different contaminated sites and incorporate site-specific ecological data.
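
The three quantities (1)-(3) follow the usual ecological-risk pattern: a daily dose per exposure pathway, a hazard quotient against a toxicity reference value, and a cleanup level obtained by scaling the soil concentration down to EHQ = 1. The formulas below are the standard textbook forms, offered as an illustration rather than RESRAD-ECORISK's exact equations.

```python
# Standard ecological-risk bookkeeping (illustrative; not the exact
# RESRAD-ECORISK equations).  The dose here is a simple food-ingestion
# dose; the real code sums several exposure pathways.

def applied_daily_dose(conc_food, intake_rate, body_weight):
    """mg/kg-day from food conc. (mg/kg), intake (kg/day), body weight (kg)."""
    return conc_food * intake_rate / body_weight

def ecological_hazard_quotient(dose, trv):
    """EHQ = dose / toxicity reference value; EHQ > 1 flags potential risk."""
    return dose / trv

def soil_cleanup_level(soil_conc, ehq):
    """Scale the soil concentration so the receptor's EHQ becomes 1."""
    return soil_conc / ehq

dose = applied_daily_dose(conc_food=12.0, intake_rate=0.05, body_weight=1.5)
ehq = ecological_hazard_quotient(dose, trv=0.2)
```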

  5. Ensuring quality in the coding process: A key differentiator for the accurate interpretation of safety data

    PubMed Central

    Nair, G. Jaya

    2013-01-01

    Medical coding and dictionaries for clinical trials have seen a wave of change over the past decade, with emphasis on more standardized tools for coding and reporting clinical data taking precedence. Coding forms the backbone of clinical reporting, as safety data reports depend primarily on the coded data. Hence, maintaining optimum coding quality is essential to the accurate analysis and interpretation of critical clinical data. The perception that medical coding is merely a process of assigning numeric/alphanumeric codes to clinical data needs to be revisited. The significance of quality coding and its impact on clinical reporting is highlighted in this article. PMID:24010060

  6. Connecting Neural Coding to Number Cognition: A Computational Account

    ERIC Educational Resources Information Center

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…

  7. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  8. Coding Categories To Record Student Talk at a Multimedia Interface.

    ERIC Educational Resources Information Center

    Klinger, S.

    1999-01-01

    Using case study methodology, this paper examines how pairs of first-time high school student users navigated through a multimedia geography CD-ROM together. Discusses coding methods for student activity; student interaction; dialog analysis; multimedia literacy; and further research needs. (Author/LRW)

  9. Codes of Ethics in Australian Education: Towards a National Perspective

    ERIC Educational Resources Information Center

    Forster, Daniella J.

    2012-01-01

    Teachers have a dual moral responsibility as both values educators and moral agents representing the integrity of the profession. Codes of ethics and conduct in teaching articulate shared professional values and aim to provide some guidance for action around recognised issues special to the profession but are also instruments of regulation which…

  10. Validation of a coupled reactive transport code in porous media

    NASA Astrophysics Data System (ADS)

    Mugler, C.; Montarnal, P.; Dimier, A.

    2003-04-01

    The safety assessment of nuclear waste disposals needs to predict the migration of radionuclides and chemical species through a geological medium. It is therefore necessary to develop and assess qualified and validated tools that integrate both the transport mechanisms through the geological media and the chemical mechanisms governing the mobility of radionuclides. In this problem, geochemical and hydrodynamic phenomena are tightly linked together. That is why the French Nuclear Energy Agency (CEA) and the French Agency for the Management of Radioactive Wastes (ANDRA) are jointly developing a coupled reactive transport code that solves a geochemical model and a transport model simultaneously. This code, which is part of the software project ALLIANCES, relies on the libraries of two geochemical codes that solve the complex ensemble of reacting chemical species: CHESS and PHREEQC. Geochemical processes considered here include ion exchange, redox reactions, acid-base reactions, surface complexation and mineral dissolution and/or precipitation. Transport is simulated using the mixed-hybrid finite element scheme CAST3M or the finite volume scheme MT3D. Together these solve Darcy's law and simulate several hydrological processes such as advection, diffusion and dispersion. The coupling algorithm is an iterative sequential algorithm. Several analytical test cases have been defined and used to validate the reactive transport code. Numerical results can be compared to analytical solutions.
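
The iterative sequential algorithm can be sketched on a deliberately tiny problem: a 1D advected tracer whose "chemistry" is first-order decay, with transport and chemistry applied in turn and iterated to a fixed point within each time step. The decay term stands in for the real geochemical solver (CHESS/PHREEQC), which is far richer; all grid parameters below are illustrative.

```python
# Sketch of iterative sequential (operator-splitting) coupling: within
# each time step, the chemistry sink (first-order decay, rate k) enters
# the transport update as a source term evaluated at the current
# iterate, and we Picard-iterate to a fixed point.  Illustrative only.

def transport_step(c, velocity, dx, dt):
    """Explicit upwind advection with a zero-inflow boundary."""
    cfl = velocity * dt / dx
    assert 0.0 < cfl <= 1.0
    return [c[i] - cfl * (c[i] - (c[i - 1] if i > 0 else 0.0))
            for i in range(len(c))]

def coupled_step(c, velocity, dx, dt, k, tol=1e-12, max_iter=100):
    guess = list(c)
    for _ in range(max_iter):
        advected = transport_step(c, velocity, dx, dt)
        new = [a - dt * k * g for a, g in zip(advected, guess)]
        if max(abs(a - b) for a, b in zip(new, guess)) < tol:
            return new
        guess = new
    return guess

c = [0.0, 1.0, 0.0, 0.0]          # initial tracer pulse
for _ in range(3):
    c = coupled_step(c, velocity=1.0, dx=1.0, dt=0.5, k=0.1)
```

Because dt·k < 1 the inner iteration is a contraction, so the fixed point (here, the advected field divided by 1 + dt·k) is reached in a handful of sweeps; real codes iterate the same way between transport and a full geochemical solve.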

  11. RTE: A computer code for Rocket Thermal Evaluation

    NASA Technical Reports Server (NTRS)

    Naraghi, Mohammad H. N.

    1995-01-01

    The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.

  12. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba withdrew from the competition, there is now only one high-density optical disc standard, Blu-ray Disc (BD), which meets the demands of high-density video programs. However, nearly all of the relevant patents are held by large companies such as Sony and Philips, so products that use BD must pay substantial licensing fees. Next-Generation Versatile Disc (NVD), our own high-density optical disc storage system, proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity that can meet the demands of playing high-density video programs. In this paper, we develop a new channel encoding process based on Low-Density Parity-Check (LDPC) codes and apply a Q-matrix-based LDPC encoding scheme in NVD's channel decoder. Combined with the portable embedded-system features of an SOPC system, we have implemented all of the decoding modules in an FPGA and tested them in the NVD experimental environment. Although there are conflicts between LDPC and the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of NVD's former decoding system, which was implemented in hardware.
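
To give a flavor of LDPC decoding in general (not the paper's Q-matrix scheme, whose details are not given here), a Gallager-style bit-flipping decoder on a tiny parity-check matrix looks like this; real LDPC matrices are large and sparse, and the (7,4) Hamming matrix below is only a stand-in.

```python
# Gallager-style bit-flipping decoding for a binary parity-check code.
# Illustrative of LDPC decoding generally, not the paper's Q-matrix
# scheme.  H here is the (7,4) Hamming parity-check matrix.

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, word, max_iter=10):
    word = list(word)
    for _ in range(max_iter):
        s = syndrome(H, word)
        if not any(s):
            return word                      # all parity checks satisfied
        # count the unsatisfied checks each bit participates in
        votes = [sum(row[j] for row, sj in zip(H, s) if sj)
                 for j in range(len(word))]
        word[max(range(len(word)), key=votes.__getitem__)] ^= 1
    return word
```

Flipping the bit involved in the most unsatisfied checks repairs isolated errors; message-passing (belief-propagation) decoders refine this idea with soft probabilities.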

  13. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  14. Code-Switching in a College Mathematics Classroom

    ERIC Educational Resources Information Center

    Chitera, Nancy

    2009-01-01

    This paper presents findings from an exploration of the discourse practices of mathematics teacher educators in initial teacher training colleges in Malawi. It examines how mathematics teacher educators construct a multilingual classroom and how they view code-switching. The discussion is based on pre-observation interviews with four…

  15. A high performance spectral code for nonlinear MHD stability

    SciTech Connect

    Taylor, M.

    1992-09-01

    A new spectral code, NSTAB, has been developed to do nonlinear stability and equilibrium calculations for the magnetohydrodynamic (MHD) equations in three dimensional toroidal geometries. The code has the resolution to test nonlinear stability by calculating bifurcated equilibria directly. These equilibria consist of weak solutions with current sheets near rational surfaces and other less localized modes. Bifurcated equilibria with a pronounced current sheet where the rotational transform crosses unity are calculated for the International Thermonuclear Experimental Reactor (ITER). Bifurcated solutions with broader resonances are found for the LHD stellarator currently being built in Japan and an optimized configuration like the Wendelstein VII-X proposed for construction in Germany. The code is able to handle the many harmonics required to capture the high mode number of these instabilities. NSTAB builds on the highly successful BETAS code, which applies the spectral method to a flux coordinate formulation of the variational principle associated with the MHD equilibrium equations. However, a new residue condition for the location of the magnetic axis has been developed and implemented. This condition is based on the weak formulation of the equations and imposes no constraints on the inner flux surfaces.

  16. A semianalytic Monte Carlo code for modelling LIDAR measurements

    NASA Astrophysics Data System (ADS)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an optical active remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
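
Two of the devices mentioned, expected-value attenuation and Russian roulette, can be sketched in a few lines: each photon carries a weight that is attenuated analytically instead of by sampled absorption, and low-weight photons are killed probabilistically in an unbiased way. This is a generic illustration of the variance-reduction technique, not the ISAC-CNR code; the layer geometry and thresholds are assumptions.

```python
import math
import random

# Semianalytic flavor: attenuation is applied deterministically to a
# photon weight (an expected-value estimate) instead of sampling
# absorption events; Russian roulette then kills low-weight photons
# without biasing the mean.  Generic sketch, not the ISAC-CNR code.

def transmit(n_photons, optical_depth, n_layers, w_threshold=0.1, seed=1):
    rng = random.Random(seed)
    per_layer = math.exp(-optical_depth / n_layers)
    total = 0.0
    for _ in range(n_photons):
        w = 1.0
        for _ in range(n_layers):
            w *= per_layer                      # analytic attenuation
            if w < w_threshold:                 # Russian roulette
                if rng.random() < w / w_threshold:
                    w = w_threshold             # survivor: weight restored
                else:
                    w = 0.0                     # killed, unbiasedly
                    break
        total += w
    return total / n_photons

estimate = transmit(50_000, optical_depth=3.0, n_layers=10)
```

In expectation the estimate equals exp(-optical_depth), but far fewer low-weight histories are tracked to completion than with naive survival sampling.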

  17. A surface definition code for turbine blade surfaces

    SciTech Connect

    Yang, S.L. ); Oryang, D.; Ho, M.J. )

    1992-05-01

    A numerical interpolation scheme has been developed for generating the three-dimensional geometry of wind turbine blades. The numerical scheme consists of (1) creating the frame of the blade through the input of two or more airfoils at specific spanwise stations and then scaling and twisting them according to the prescribed distributions of chord, thickness, and twist along the span of the blade; (2) transforming the physical coordinates of the blade frame into a computational domain that complies with the interpolation requirements; and finally (3) applying the bi-tension spline interpolation method, in the computational domain, to determine the coordinates of any point on the blade surface. Detailed descriptions of the overall approach and philosophy of the code development are given along with the operation of the code. To show the usefulness of the bi-tension spline interpolation code developed, two examples are given, namely CARTER and MICON blade surface generation. Numerical results are presented in both graphic and data forms. The solutions obtained in this work show that the computer code developed can be a powerful tool for generating the surface coordinates for any three-dimensional blade.
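
Step (1), scaling and twisting airfoil sections along the span, is easy to sketch; the bi-tension spline interpolation between stations is not shown. The four-point "airfoil" and the chord/twist schedule below are hypothetical stand-ins for real section data.

```python
import math

# Step (1) of the blade-surface scheme: place a 2D airfoil section at a
# spanwise station by scaling it to the local chord and rotating it by
# the local twist.  Section data and the chord/twist schedule are
# hypothetical; the spline interpolation between stations is omitted.

def place_section(airfoil_xy, chord, twist_deg, span_z):
    t = math.radians(twist_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    return [(chord * (x * cos_t - y * sin_t),
             chord * (x * sin_t + y * cos_t),
             span_z)
            for x, y in airfoil_xy]

unit_section = [(0.0, 0.0), (0.5, 0.08), (1.0, 0.0), (0.5, -0.05)]
frame = [place_section(unit_section, chord=c, twist_deg=tw, span_z=z)
         for c, tw, z in [(1.2, 12.0, 0.0), (0.9, 6.0, 2.5), (0.5, 1.0, 5.0)]]
```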

  18. Sparse code of conflict in a primate society

    PubMed Central

    Daniels, Bryan C.; Krakauer, David C.; Flack, Jessica C.

    2012-01-01

    Animals living in groups collectively produce social structure. In this context individuals make strategic decisions about when to cooperate and compete. This requires that individuals can perceive patterns in collective dynamics, but how this pattern extraction occurs is unclear. Our goal is to identify a model that extracts meaningful social patterns from a behavioral time series while remaining cognitively parsimonious by making the fewest demands on memory. Using fine-grained conflict data from macaques, we show that sparse coding, an important principle of neural compression, is an effective method for compressing collective behavior. The sparse code is shown to be efficient, predictive, and socially meaningful. In our monkey society, the sparse code of conflict is composed of related individuals, the policers, and the alpha female. Our results suggest that sparse coding is a natural technique for pattern extraction when cognitive constraints and small sample sizes limit the complexity of inferential models. Our approach highlights the need for cognitive experiments addressing how individuals perceive collective features of social organization. PMID:22891296
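
Sparse coding in the compression sense used here can be sketched with orthogonal matching pursuit: represent a signal using as few dictionary atoms as possible. The dictionary and signal below are synthetic toys; the paper learns its dictionary from behavioral conflict series.

```python
import numpy as np

# Orthogonal matching pursuit (OMP): greedily pick the dictionary atom
# most correlated with the residual, refit on the chosen atoms, repeat.
# Synthetic orthonormal dictionary; the paper's dictionary is learned
# from behavioral data.

def omp(D, signal, n_atoms):
    support, residual, coeffs = [], signal.copy(), None
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ residual)))       # best-matching atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], signal, rcond=None)
        residual = signal - D[:, support] @ coeffs
    return support, coeffs

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((8, 8)))         # orthonormal atoms
signal = 2.0 * D[:, 3] + 1.5 * D[:, 7]                   # truly 2-sparse
support, coeffs = omp(D, signal, n_atoms=2)
```

With an orthonormal dictionary OMP recovers the true support exactly; the "sparse code" is then just the short list of active atoms and their weights, mirroring the paper's compression of conflict events into a few socially meaningful components.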

  19. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which are still unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
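
A chemistry-coupled hydrodynamics code ultimately integrates a reaction network as a set of ODEs in each cell. A minimal single-cell sketch with one hypothetical reaction A + A → B (rate coefficient k) shows the bookkeeping; real astrochemical networks from the public databases couple hundreds of species and need stiff implicit solvers rather than the explicit Euler used here for brevity.

```python
# Single-cell reaction-network integration for a hypothetical reaction
# A + A -> B with rate coefficient k:
#     dn_A/dt = -2 k n_A^2,   dn_B/dt = +k n_A^2.
# Explicit Euler for brevity; production codes use stiff implicit solvers.

def integrate(n_a, n_b, k, dt, n_steps):
    for _ in range(n_steps):
        rate = k * n_a * n_a        # reaction rate per unit volume
        n_a -= 2.0 * rate * dt      # two A consumed per reaction
        n_b += rate * dt            # one B produced per reaction
    return n_a, n_b

n_a, n_b = integrate(n_a=1.0, n_b=0.0, k=0.5, dt=1e-3, n_steps=5000)
```

Note that this update conserves n_A + 2 n_B exactly (atom conservation), a useful check when verifying a chemistry module; the analytic solution n_A(t) = n_0 / (1 + 2 k n_0 t) provides a second check.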

  20. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, defected-lattice code, topological subsystem code and 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large-code-size limit depends on p, the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  1. Development of a parallelization strategy for the VARIANT code

    SciTech Connect

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-12-31

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis on the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen on the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab.
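
The payoff of parallelizing only the two intensive phases is bounded by the sequential remainder, i.e. Amdahl's law. The parallel fraction used below is an illustrative number, not a measured VARIANT profile.

```python
# Amdahl's-law bound on the speedup from parallelizing only part of a
# code: with parallel fraction p on n processors,
#     speedup = 1 / ((1 - p) + p / n),   efficiency = speedup / n.
# p = 0.9 is an illustrative figure, not a measured VARIANT profile.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def efficiency(p, n):
    return amdahl_speedup(p, n) / n

# Even with 90% of the work (response matrices + iterative solver)
# parallelized, 8 processors yield well under 8x.
s8 = amdahl_speedup(0.9, 8)
```

This is why the report emphasizes load balance: the sequential input/output phases and any idle processors cap the achievable efficiency regardless of processor count.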

  2. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use the proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  3. DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Kargaran, H.; Minuchehr, A.; Zolfaghari, A.

    2016-07-01

    In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in FORTRAN90 with parallel-processing capability using the OpenMP framework. DgSMC-B is capable of handling 3-dimensional (3D) geometries created with first- and second-order surfaces. It performs independent particle tracking through the complex geometry without the intervention of a mesh. In addition, it resolves the computational domain boundary and computes cell volumes in border grids using a hexahedral mesh. The code is robust and self-contained, requiring no separate tools such as mesh generators. The results of six test cases are presented to demonstrate its ability to deal with a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems, and the results are found to be in good agreement with reference solutions and experimental data.

  4. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A main obstacle to indoor navigation is maintaining a robust navigation solution (i.e. air vehicle position and attitude estimates) given the inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information regarding direction for vehicle movement, but not the scale of movement. Movement scale information is contained in the depth to the features. Depth-from-defocus is a classic technique proposed to derive depth from a single image that involves analysis of the blur inherent in a scene with a narrow depth of field. A challenge to this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. A key to measuring depth using a coded aperture then is to correctly match the blur pattern in a region of the scene with a previously determined set of blur patterns for known depths. As the depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature to be measured using a depth-from-defocus technique increases, the measurement performance decreases. However, a Fresnel zone
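The blur-pattern matching step at the heart of coded-aperture depth-from-defocus can be sketched as follows. This toy is 1D, numpy-only, and uses box-blur kernels with made-up calibration depths; a real coded aperture supplies far more distinctive kernels, which is exactly why it outperforms a round aperture.

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.standard_normal(64)      # stand-in for the in-focus scene patch

# Calibrated blur kernels for a few known depths (hypothetical values);
# in practice these are measured through the coded aperture.
kernels = {0.5: np.ones(3) / 3, 1.0: np.ones(5) / 5, 2.0: np.ones(9) / 9}

def estimate_depth(observed, sharp, kernels):
    """Return the calibrated depth whose kernel best explains the blur."""
    errs = {d: np.sum((np.convolve(sharp, k, mode='same') - observed) ** 2)
            for d, k in kernels.items()}
    return min(errs, key=errs.get)

observed = np.convolve(sharp, kernels[1.0], mode='same')  # simulate depth 1.0
assert estimate_depth(observed, sharp, kernels) == 1.0
```

As the abstract notes, kernels for neighbouring depths become harder to distinguish far from the focal plane, which is what degrades the measurement there.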

  5. A bit allocation method for sparse source coding.

    PubMed

    Kaaniche, Mounir; Fraysse, Aurélia; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2014-01-01

    In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well-adapted for image data, which have a sparse representation in a wavelet basis. In this paper, we propose new approximations of the entropy and the distortion functions using piecewise affine and exponential forms, respectively. Because of these approximations, bit allocation is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.
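As a point of comparison for the allocation problem described above, the classic high-rate formula for independent Gaussian subbands can be sketched in a few lines; the paper's Bernoulli-generalized-Gaussian model and convex reformulation refine this textbook baseline for sparse sources.

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Textbook high-rate bit allocation for independent Gaussian subbands:
    b_i = B/N + 0.5*log2(var_i / geometric_mean(var)).
    (A Gaussian baseline only; the paper handles the sparser
    Bernoulli-generalized-Gaussian case.)"""
    v = np.asarray(variances, dtype=float)
    gmean = np.exp(np.mean(np.log(v)))
    return total_bits / len(v) + 0.5 * np.log2(v / gmean)

bits = allocate_bits([4.0, 1.0, 0.25], 6.0)
assert abs(bits.sum() - 6.0) < 1e-9    # meets the total-rate budget
assert bits[0] > bits[1] > bits[2]     # higher variance gets more bits
```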

  6. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  7. On a stochastic approach to a code performance estimation

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

    The main goal of efficient software profiling is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important aspect of the overhead arises from random variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. In order to overcome these difficulties we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler using 3D surface plots produced by the function surf. The idea can be used for estimating program efficiency.
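The finite-normal-mixture idea can be sketched with a tiny hand-rolled EM fit. This is a generic stand-in, not the method of moving separation of mixtures itself, and the "profiler timing differences" below are synthetic numbers invented for illustration.

```python
import numpy as np

def fit_normal_mixture(x, iters=300):
    """Plain 2-component EM fit of a normal mixture (generic sketch)."""
    x = np.asarray(x, float)
    mu = np.quantile(x, [0.25, 0.75])          # deterministic initialisation
    var = np.full(2, x.var())
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                 / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic "difference of run times for the same code": mostly near zero,
# with an occasional cache-miss-like mode around 5 (values are made up).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(5.0, 1.0, 200)])
w, mu, var = fit_normal_mixture(x)
assert abs(min(mu) - 0.0) < 0.5 and abs(max(mu) - 5.0) < 0.5
```

The fitted component weights and means then characterise the "ordinary" and "perturbed" timing regimes.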

  8. The dynamic neural filter: a binary model of spatiotemporal coding.

    PubMed

    Quenet, Brigitte; Horn, David

    2003-02-01

    We describe and discuss the properties of a binary neural network that can serve as a dynamic neural filter (DNF), which maps regions of input space into spatiotemporal sequences of neuronal activity. Both deterministic and stochastic dynamics are studied, allowing the investigation of the stability of spatiotemporal sequences under noisy conditions. We define a measure of the coding capacity of a DNF and develop an algorithm for constructing a DNF that can serve as a source of given codes. On the basis of this algorithm, we suggest using a minimal DNF capable of generating observed sequences as a measure of complexity of spatiotemporal data. This measure is applied to experimental observations in the locust olfactory system, whose reverberating local field potential provides a natural temporal scale allowing the use of a binary DNF. For random synaptic matrices, a DNF can generate very large cycles, thus becoming an efficient tool for producing spatiotemporal codes. The latter can be stabilized by applying to the parameters of the DNF a learning algorithm with suitable margins.
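A minimal deterministic DNF update can be sketched as follows. The weights here are random illustrative values, not the output of the paper's constructive algorithm; the point is only the mapping from an input to a spatiotemporal sequence of binary states.

```python
import numpy as np

def dnf_sequence(W, R, x, theta, steps):
    """Deterministic dynamic-neural-filter update,
    s_i(t+1) = H(sum_j W_ij s_j(t) + R_i x - theta_i),
    started from the all-zero state (H is the Heaviside step)."""
    s, seq = np.zeros(len(R), dtype=int), []
    for _ in range(steps):
        s = ((W @ s + R * x - theta) > 0).astype(int)
        seq.append(tuple(s))
    return seq

rng = np.random.default_rng(0)
n = 6
W = rng.standard_normal((n, n))     # random synaptic matrix
R = rng.standard_normal(n)          # input weights (illustrative values)
theta = np.zeros(n)

# Inputs of opposite sign are mapped to different spatiotemporal sequences.
seq_a = dnf_sequence(W, R, +0.3, theta, steps=8)
seq_b = dnf_sequence(W, R, -0.3, theta, steps=8)
assert seq_a != seq_b
```

With random matrices such dynamics can fall into very long cycles, which is the regime the abstract exploits for producing spatiotemporal codes.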

  9. CTCN: Colloid transport code -- nuclear; A user's manual

    SciTech Connect

    Jain, R.

    1993-09-01

    This report describes the CTCN computer code, designed to solve the equations of transient colloidal transport of radionuclides in porous and fractured media. This Fortran 77 package solves systems of coupled nonlinear differential-algebraic equations with a wide range of boundary conditions. The package uses the Method of Lines technique with a special section which forms finite-difference discretizations in up to four spatial dimensions to automatically convert the system into a set of ordinary differential equations. The CTCN code then solves these equations using a robust, efficient ODE solver. Thus CTCN can be used to solve population balance equations along with the usual transport equations to model colloid transport processes or as a general problem solver to treat up to four-dimensional differential-algebraic systems.
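The Method of Lines workflow described above can be sketched on a toy diffusion problem: discretize in space, then hand the resulting ODE system to an integrator. CTCN couples such a discretization to a robust stiff solver; a plain RK4 loop is enough for this non-stiff illustration.

```python
import numpy as np

# Discretize u_t = D*u_xx on [0,1] with central differences in space,
# turning the PDE into the ODE system du/dt = f(u) (Method of Lines).
D, n = 1.0, 41
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def f(u):
    du = np.zeros_like(u)
    du[1:-1] = D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return du                       # u = 0 held at both boundaries

u = np.sin(np.pi * x)               # eigenmode with known decay rate
dt = 1e-4
for _ in range(1000):               # integrate to t = 0.1 with RK4
    k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3)
    u = u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# The exact solution decays as exp(-pi^2*D*t); check the midpoint.
assert abs(u[n // 2] - np.exp(-np.pi**2 * D * 0.1)) < 1e-3
```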

  10. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  11. Nexus: a modular workflow management system for quantum simulation codes

    SciTech Connect

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  12. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  13. A Plastic Temporal Brain Code for Conscious State Generation

    PubMed Central

    Dresp-Langley, Birgitta; Durup, Jean

    2009-01-01

    Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms. PMID:19644552

  14. CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION

    SciTech Connect

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-15

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  15. CBEAM. 2-D: a two-dimensional beam field code

    SciTech Connect

    Dreyer, K.A.

    1985-05-01

    CBEAM.2-D is a two-dimensional solution of Maxwell's equations for the case of an electron beam propagating through an air medium. Solutions are performed in the beam-retarded time frame. Conductivity is calculated self-consistently with field equations, allowing sophisticated dependence of plasma parameters to be handled. A unique feature of the code is that it is implemented on an IBM PC microcomputer in the BASIC language. Consequently, it should be available to a wide audience.

  16. A chemical reaction network solver for the astrophysics code NIRVANA

    NASA Astrophysics Data System (ADS)

    Ziegler, U.

    2016-02-01

    Context. Chemistry often plays an important role in astrophysical gases. It regulates thermal properties by changing species abundances and via ionization processes. This way, time-dependent cooling mechanisms and other chemistry-related energy sources can have a profound influence on the dynamical evolution of an astrophysical system. Modeling those effects with the underlying chemical kinetics in realistic magneto-gasdynamical simulations provides the basis for a better link to observations. Aims: The present work describes the implementation of a chemical reaction network solver into the magneto-gasdynamical code NIRVANA. For this purpose a multispecies structure is installed, and a new module for evolving the rate equations of chemical kinetics is developed and coupled to the dynamical part of the code. A small chemical network for a hydrogen-helium plasma was constructed including associated thermal processes which is used in test problems. Methods: Evolving a chemical network within time-dependent simulations requires the additional solution of a set of coupled advection-reaction equations for species and gas temperature. Second-order Strang-splitting is used to separate the advection part from the reaction part. The ordinary differential equation (ODE) system representing the reaction part is solved with a fourth-order generalized Runge-Kutta method applicable for stiff systems inherent to astrochemistry. Results: A series of tests was performed in order to check the correctness of the numerical and technical implementation. Tests include well-known stiff ODE problems from the mathematical literature to confirm the accuracy properties of the solver, as well as problems combining gasdynamics and chemistry. Overall, very satisfactory results are achieved. Conclusions: The NIRVANA code is now ready to handle astrochemical processes in time-dependent simulations. An easy-to-use interface allows implementation of complex networks including thermal processes.
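The Strang-splitting structure described in the Methods can be sketched on a toy advection-reaction problem with exact substeps: spectral advection on a periodic grid and analytic exponential decay standing in for the stiff Runge-Kutta reaction step that NIRVANA uses.

```python
import numpy as np

# Strang splitting: advance advection for dt/2, reaction for dt, then
# advection for dt/2 again (second-order accurate). Toy system:
# u_t + c*u_x = -k*u on a periodic unit interval.
n, c, k, dt = 64, 1.0, 3.0, 0.02
x = np.linspace(0, 1, n, endpoint=False)
dx = x[1] - x[0]

def advect(u, tau):
    # exact periodic advection by c*tau via a spectral phase shift
    freqs = np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-2j*np.pi*freqs*c*tau)))

def react(u, tau):
    return u * np.exp(-k * tau)     # exact solution of u' = -k*u

u = np.exp(-100 * (x - 0.5) ** 2)
for _ in range(50):                 # advance to t = 1.0
    u = advect(u, dt / 2)
    u = react(u, dt)
    u = advect(u, dt / 2)

# After one full period the pulse returns to its start, damped by exp(-k).
exact = np.exp(-100 * (x - 0.5) ** 2) * np.exp(-k * 1.0)
assert np.max(np.abs(u - exact)) < 1e-8
```

Because both substeps are exact and commute here, the split solution matches the unsplit one to roundoff; in the real advection-reaction system the splitting error is second order in dt.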

  17. A reference manual for the Event Progression Analysis Code (EVNTRE)

    SciTech Connect

    Griesmeyer, J.M.; Smith, L.N.

    1989-09-01

    This document is a reference guide for the Event Progression Analysis (EVNTRE) code developed at Sandia National Laboratories. EVNTRE is designed to process the large accident progression event trees and associated files used in probabilistic risk analyses for nuclear power plants. However, the general nature of EVNTRE makes it applicable to a wide variety of analyses that involve the investigation of a progression of events which lead to a large number of sets of conditions or scenarios. The EVNTRE code efficiently processes large, complex event trees. It has the capability to assign probabilities to event tree branch points in several different ways, to classify pathways or outcomes into user-specified groupings, and to sample input distributions of probabilities and parameters.
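The branch-probability and binning mechanics of an event-tree processor can be sketched in a few lines. The event names and probabilities below are illustrative only, not taken from any EVNTRE input deck.

```python
from itertools import product

# Each question has branch probabilities; a pathway's probability is the
# product along its branches, and pathways are binned by a classifier.
questions = [
    ("power_recovered", {"yes": 0.9, "no": 0.1}),
    ("containment_holds", {"yes": 0.95, "no": 0.05}),
]

def pathways(questions):
    names = [q for q, _ in questions]
    for combo in product(*(branches.items() for _, branches in questions)):
        path = dict(zip(names, (branch for branch, _ in combo)))
        prob = 1.0
        for _, p in combo:
            prob *= p
        yield path, prob

bins = {}
for path, prob in pathways(questions):
    outcome = "ok" if path["containment_holds"] == "yes" else "release"
    bins[outcome] = bins.get(outcome, 0.0) + prob

assert abs(bins["ok"] - 0.95) < 1e-12 and abs(bins["release"] - 0.05) < 1e-12
```

EVNTRE adds, among other things, branch probabilities that depend on earlier events and sampling over input distributions; this sketch shows only the core enumeration and grouping.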

  18. RAMSES-CH: a new chemodynamical code for cosmological simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Courty, S.; Gibson, B. K.; Kawata, D.; Calura, F.; Teyssier, R.

    2012-07-01

    We present a new chemodynamical code, RAMSES-CH, for use in simulating the self-consistent evolution of chemical and hydrodynamical properties of galaxies within a fully cosmological framework. We build upon the adaptive mesh refinement code RAMSES, which includes a treatment of self-gravity, hydrodynamics, star formation, radiative cooling and supernova feedback, to trace the dominant isotopes of C, N, O, Ne, Mg, Si and Fe. We include the contribution of Type Ia and Type II supernovae, in addition to low- and intermediate-mass asymptotic giant branch stars, relaxing the instantaneous recycling approximation. The new chemical evolution modules are highly flexible and portable, lending themselves to ready exploration of variations in the underpinning stellar and nuclear physics. We apply RAMSES-CH to the cosmological simulation of a typical L★ galaxy, demonstrating the successful recovery of the basic empirical constraints regarding [α/Fe]-[Fe/H] and Type Ia/II supernova rates.

  19. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.
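A simplified version of the operation count over the conventional trellis can be sketched as follows; the paper's measure is finer-grained and also covers minimal trellis modules, so treat this as the textbook baseline only.

```python
def viterbi_ops_per_bit(memory, n_outputs):
    """Rough operation count per decoded bit for hard-decision Viterbi
    decoding of a rate-1/n convolutional code on the conventional trellis:
    each of the 2^(m+1) branches needs n metric additions, and each of the
    2^m states performs an add-compare-select (2 additions, 1 comparison).
    (A simplified textbook count, not the paper's refined measure.)"""
    states, branches = 2 ** memory, 2 ** (memory + 1)
    additions = branches * n_outputs + branches   # branch metrics + path updates
    comparisons = states                          # one select per state
    return additions, comparisons

# The widely used (2,1,6) code: 64 states, 128 branches per section.
adds, comps = viterbi_ops_per_bit(6, 2)
assert (adds, comps) == (384, 64)
```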

  20. Design of a coded aperture Compton telescope imaging system (CACTIS)

    NASA Astrophysics Data System (ADS)

    Volkovskii, Alexander; Clajus, Martin; Gottesman, Stephen R.; Malik, Hans; Schwartz, Kenneth; Tumer, Evren; Tumer, Tumay; Yin, Shi

    2010-08-01

    We have developed a prototype of a scalable high-resolution direction and energy sensitive gamma-ray detection system that operates in both coded aperture (CA) and Compton scatter (CS) modes to obtain optimal efficiency and angular resolution over a wide energy range. The design consists of an active coded aperture constructed from 52 individual CZT planar detectors each measuring 3×3×6 mm3 arranged in a MURA pattern on a 10×10 grid, with a monolithic 20×20×5 mm3 pixelated (8×8) CZT array serving as the focal plane. The combined mode is achieved by using the aperture plane array for both Compton scattering of high-energy photons and as a coded mask for low-energy radiation. The prototype instrument was built using two RENA-3 test systems, one each for the aperture and the focal plane, stacked on top of each other at a distance of 130 mm. The test systems were modified to coordinate (synchronize) readout and provide coincidence information of events within a user-adjustable 40-1,280 ns window. The measured angular resolution of the device is <1 deg (17 mrad) in CA mode and is predicted to be approximately 3 deg (54 mrad) in CS mode. The energy resolution of the CZT detectors is approximately 5% FWHM at 120 keV. We will present details of the system design and initial results for the calibration and performance of the prototype.
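The MURA mask construction used for the aperture plane can be sketched generically; the basic prime-order pattern below is built from quadratic residues, while the 52-detector layout of the prototype is an engineering arrangement of such a pattern on its 10×10 grid.

```python
import numpy as np

def mura(p):
    """Basic MURA mask of prime order p (1 = open element, 0 = opaque)."""
    residues = {(k * k) % p for k in range(1, p)}   # quadratic residues mod p
    c = lambda i: 1 if i in residues else -1
    a = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if i == 0:
                a[i, j] = 0                 # first row closed
            elif j == 0:
                a[i, j] = 1                 # first column (i != 0) open
            else:
                a[i, j] = 1 if c(i) * c(j) == 1 else 0
    return a

m = mura(5)
assert m.sum() == (5 * 5 - 1) // 2          # a MURA mask is ~50% open
```

The near-50% open fraction is what gives coded apertures their throughput advantage over a pinhole at low energies.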

  1. A Search for Core Values: Towards a Model Code of Ethics for Information Professionals.

    ERIC Educational Resources Information Center

    Koehler, Wallace C.; Pemberton, J. Michael

    2000-01-01

    Examines ethical codes and standards of professional practice promulgated by diverse associations of information professionals from varied national outlooks to identify a core set of ethical principles. Offers a model code based on a textual consensus of those ethical codes and standards examined. Three appendices provide information on…

  2. Reinvestigation of moving punctured black holes with a new code

    SciTech Connect

    Cao Zhoujian; Yo Hweijang; Yu Juiping

    2008-12-15

    We report on our code, in which the moving puncture method is applied and an adaptive/fixed mesh refinement is implemented, and on its preliminary performance on black hole simulations. Based on the Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation, up-to-date gauge conditions and the modifications of the formulation are also implemented and tested. In this work, we present our primary results about the simulation of a single static black hole, of a moving single black hole, and of the head-on collision of a binary black hole system. For the static punctured black hole simulations, different modifications of the BSSN formulation are applied. It is demonstrated that both the currently used sets of modifications lead to a stable evolution. For cases of a moving punctured black hole with or without spin, we search for viable gauge conditions and study the effect of spin on the black hole evolution. Our results confirm previous results obtained by other research groups. In addition, we find a new gauge condition, which has not yet been adopted by any other researchers, which can also give stable and accurate black hole evolution calculations. We examine the performance of the code for the head-on collision of a binary black hole system, and the agreement of the gravitational waveform it produces with that obtained in other works. In order to understand qualitatively the influence of matter on the binary black hole collisions, we also investigate the same head-on collision scenarios but perturbed by a scalar field. The numerical simulations performed with this code not only give stable and accurate results that are consistent with the works by other numerical relativity groups, but also lead to the discovery of a new viable gauge condition, as well as clarify some ambiguities in the modification of the BSSN formulation. 
These results demonstrate that this code is reliable and ready to be used in the study of more realistic astrophysical scenarios and of numerical

  3. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    SciTech Connect

    Johnson, Jeffrey

    2006-04-01

    Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.

  4. RHALE: A 3-D MMALE code for unstructured grids

    SciTech Connect

    Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.

    1993-08-01

    This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two- and three-dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.

  5. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    2006-04-01

    Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.

  6. Modulation and coding used by a major satellite communications company

    NASA Technical Reports Server (NTRS)

    Renshaw, K. H.

    1992-01-01

    Hughes Communications Inc. is a major satellite communications company providing or planning to provide the full spectrum of services available on satellites. All of the current services use conventional modulation and coding techniques that were well known a decade or longer ago. However, the future mobile satellite service will use significantly more advanced techniques. JPL, under NASA sponsorship, has pioneered many of the techniques that will be used.

  7. A signature of neural coding at human perceptual limits

    PubMed Central

    Bays, Paul M.

    2016-01-01

    Simple visual features, such as orientation, are thought to be represented in the spiking of visual neurons using population codes. I show that optimal decoding of such activity predicts characteristic deviations from the normal distribution of errors at low gains. Examining human perception of orientation stimuli, I show that these predicted deviations are present at near-threshold levels of contrast. The findings may provide a neural-level explanation for the appearance of a threshold in perceptual awareness whereby stimuli are categorized as seen or unseen. As well as varying in error magnitude, perceptual judgments differ in certainty about what was observed. I demonstrate that variations in the total spiking activity of a neural population can account for the empirical relationship between subjective confidence and precision. These results establish population coding and decoding as the neural basis of perception and perceptual confidence. PMID:27604067
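The population-coding setup can be sketched as follows: von-Mises-like tuning curves tile orientation, spike counts are Poisson, and a grid-search maximum-likelihood decoder recovers the stimulus. All parameter values are illustrative; lowering the gain mimics the low-contrast regime in which the paper finds characteristic deviations from normally distributed errors.

```python
import numpy as np

prefs = np.linspace(-np.pi, np.pi, 32, endpoint=False)  # preferred orientations

def rates(theta, gain):
    """Von-Mises-like tuning: expected spike count of each neuron."""
    return gain * np.exp(np.cos(prefs - theta) - 1.0)

def decode(spikes, gain):
    """Maximum-likelihood estimate over a grid of candidate orientations."""
    grid = np.linspace(-np.pi, np.pi, 721)
    ll = [np.sum(spikes * np.log(rates(g, gain) + 1e-12))
          - rates(g, gain).sum() for g in grid]
    return grid[int(np.argmax(ll))]

rng = np.random.default_rng(0)
theta, gain = 0.7, 20.0
errors = [decode(rng.poisson(rates(theta, gain)), gain) - theta
          for _ in range(100)]
assert abs(float(np.median(errors))) < 0.1   # high gain: accurate decoding
```

Repeating the experiment with a much smaller gain broadens the error distribution, which is where the predicted non-Gaussian deviations appear.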

  8. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  9. A generic scheme for progressive point cloud coding.

    PubMed

    Huang, Yan; Peng, Jingliang; Kuo, C-C Jay; Gopi, M

    2008-01-01

    In this paper, we propose a generic point cloud encoder that provides a unified framework for compressing different attributes of point samples corresponding to 3D objects with arbitrary topology. In the proposed scheme, the coding process is led by an iterative octree cell subdivision of the object space. At each level of subdivision, positions of point samples are approximated by the geometry centers of all tree-front cells, while normals and colors are approximated by their statistical average within each tree-front cell. With this framework, we employ attribute-dependent encoding techniques to exploit the different characteristics of various attributes. All of these have led to significant improvement in the rate-distortion (R-D) performance and a computational advantage over the state of the art. Furthermore, given sufficient levels of octree expansion, normal space partitioning and resolution of color quantization, the proposed point cloud encoder can be potentially used for lossless coding of 3D point clouds.
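    The level-by-level approximation described above can be sketched as follows (illustrative NumPy, not the authors' encoder): at a given octree depth, each point's position is replaced by the center of its containing cell, and its color by the cell's statistical average.

```python
import numpy as np

# Illustrative sketch (not the authors' encoder) of the octree approximation:
# at a given subdivision level, each point's position is approximated by the
# center of its containing cell, and its color by the cell average.
def octree_approximate(points, colors, level):
    cells = 2 ** level                            # cells per axis at this depth
    lo, hi = points.min(0), points.max(0)
    span = hi - lo + 1e-12                        # avoid division by zero
    idx = np.clip(((points - lo) / span * cells).astype(int), 0, cells - 1)
    keys = (idx[:, 0] * cells + idx[:, 1]) * cells + idx[:, 2]
    approx_pos = np.empty_like(points, dtype=float)
    approx_col = np.empty_like(colors, dtype=float)
    for k in np.unique(keys):
        m = keys == k
        cell = idx[m][0]
        approx_pos[m] = lo + (cell + 0.5) * span / cells   # cell center
        approx_col[m] = colors[m].mean(0)                  # statistical average
    return approx_pos, approx_col
```

    Each extra level halves the cell size, so the position error bound shrinks geometrically, which is the progressive-refinement property the encoder exploits.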

  10. Universal transversal gates with color codes: A simplified approach

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.

    2015-03-01

    We provide a simplified yet rigorous presentation of the ideas from Bombín's paper (arXiv:1311.0879v3). Our presentation is self-contained, and assumes only basic concepts from quantum error correction. We provide an explicit construction of a family of color codes in arbitrary dimensions and describe some of their crucial properties. Within this framework, we explicitly show how to transversally implement the generalized phase gate R_n = diag(1, e^{2πi/2^n}), which deviates from the method in the aforementioned paper, allowing an arguably simpler proof. We describe how to implement the Hadamard gate H fault tolerantly using code switching. In three dimensions, this yields, together with the transversal controlled-NOT (CNOT), a fault-tolerant universal gate set {H, CNOT, R_3} without state distillation.

  11. A signature of neural coding at human perceptual limits.

    PubMed

    Bays, Paul M

    2016-09-01

    Simple visual features, such as orientation, are thought to be represented in the spiking of visual neurons using population codes. I show that optimal decoding of such activity predicts characteristic deviations from the normal distribution of errors at low gains. Examining human perception of orientation stimuli, I show that these predicted deviations are present at near-threshold levels of contrast. The findings may provide a neural-level explanation for the appearance of a threshold in perceptual awareness whereby stimuli are categorized as seen or unseen. As well as varying in error magnitude, perceptual judgments differ in certainty about what was observed. I demonstrate that variations in the total spiking activity of a neural population can account for the empirical relationship between subjective confidence and precision. These results establish population coding and decoding as the neural basis of perception and perceptual confidence. PMID:27604067

  12. A user guide for the EMTAC-MZ CFD code

    NASA Technical Reports Server (NTRS)

    Szema, Kuo-Yen; Chakravarthy, Sukumar R.

    1990-01-01

    The computer code (EMTAC-MZ) was applied to investigate the flow field over a variety of very complex three-dimensional (3-D) configurations across the Mach number range (subsonic, transonic, supersonic, and hypersonic flow). In the code, a finite volume, multizone implementation of high accuracy, total variation diminishing (TVD) formulation (based on Roe's scheme) is used to solve the unsteady Euler equations. In the supersonic regions of the flow, an infinitely large time step and a space-marching scheme is employed. A finite time step and a relaxation or 3-D approximate factorization method is used in subsonic flow regions. The multizone technique allows very complicated configurations to be modeled without geometry modifications, and can easily handle combined internal and external flow problems. An elliptic grid generation package is built into the EMTAC-MZ code. To generate the computational grid, only the surface geometry data are required. Results obtained for a variety of configurations, such as fighter-like configurations (F-14, AVSTOL), flow through inlet, multi-bodies (shuttle with external tank and SRBs), are reported and shown to be in good agreement with available experimental data.

  13. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
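    The purine/pyrimidine reduction described above can be stated in a few lines of Python (an illustrative teaching aid in the spirit of the article, not code from it):

```python
# Minimal illustration (a teaching aid in the spirit of the article):
# collapsing each base to purine (R = A, G; double-ring) or pyrimidine
# (Y = C, T/U; single-ring) turns a codon into a 3-bit binary word.
PURINES = {"A", "G"}

def codon_to_binary(codon):
    # One bit per base: 1 = purine, 0 = pyrimidine.
    return "".join("1" if base in PURINES else "0" for base in codon.upper())
```

    Since each codon maps to one of only eight binary words, the reduced code partitions the 64 codons into eight groups, each encoding a smaller set of possible amino acids.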

  14. Development of a coded 16-ary CPFSK coherent demodulator

    NASA Technical Reports Server (NTRS)

    Clarke, Ken; Davis, Robert; Roesch, Jim

    1988-01-01

    Theory and hardware are described for a proof-of-concept 16-ary continuous phase frequency shift keying (16-CPFSK) digital modem. The 16 frequencies are spaced every 1/16th baud rate for 2 bits/sec/Hz operation. Overall rate 3/4 convolutional coding is incorporated. The demodulator differs significantly from typical quadrature phase detector approaches in that phase is coherently measured by processing the baseband output of a frequency discriminator. Baud rate phase samples from the baseband processor are decoded to yield the original data stream. The method of encoding onto the 16-ary phase nodes, together with convolutional coding gain, results in near quad PSK (QPSK) performance. The modulated signal is of constant envelope; thus the power amplifier can be saturated for peak performance. The spectrum is inherently bandlimited and requires no RF filter.

  15. Specifications of a Plasmasphere Modeling Code for GGCM

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Ober, D.

    2000-01-01

    The Dynamic Global Core Plasma Model (DGCPM) is a parameterized model for core or thermal plasma in the magnetosphere. The model accounts for dayside ionospheric outflow and nightside inflow. It accounts for the global pattern of convection and corotation. The model is capable of being coupled to ring current and superthermal electron models for the purpose of providing thermal plasma spatial distributions and for the purpose of accepting the dynamic influences of these plasma populations back upon the thermal plasma. The DGCPM is designed to operate alone or to operate as part of a larger integrated package. The convection electric field and magnetic field used within the DGCPM can be shared with models of other plasma populations, in addition to the exchange of parameters important to the collective modeling of whole plasma systems in the inner magnetosphere. This talk will present the features of the DGCPM model code and the various forms of information that can be exchanged with other cooperating codes.

  16. The genetic code as a periodic table: algebraic aspects.

    PubMed

    Bashford, J D; Jarvis, P D

    2000-01-01

    The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995. BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data such as codon periodicities, and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.

  17. E coding: a missing link for injury prevention.

    PubMed

    Halpern, J S

    1993-06-01

    E codes are a practical, detailed, and feasible method of collecting much needed information about trauma morbidity. Emergency care providers are the key to ensuring accurate information because of their ability to obtain specific information from prehospital personnel or family. Charting the information on the emergency record will simplify the task for medical records coders, researchers, epidemiologists, and public health officials. The detailed information used for E codes is not just research trivia, but rather beneficial information for all emergency providers. A specific plan of care can be developed to address the medical and social needs of each patient. This in turn may help to reduce future injuries, whether they are caused by high-risk behaviors or repetitive abuse situations.

  18. A cosmological hydrodynamic code based on the piecewise parabolic method

    NASA Astrophysics Data System (ADS)

    Gheller, Claudio; Pantano, Ornella; Moscardini, Lauro

    1998-04-01

    We present a hydrodynamical code for cosmological simulations which uses the piecewise parabolic method (PPM) to follow the dynamics of the gas component and an N-body particle-mesh algorithm for the evolution of the collisionless component. The gravitational interaction between the two components is regulated by the Poisson equation, which is solved by a standard fast Fourier transform (FFT) procedure. In order to simulate cosmological flows, we have introduced several modifications to the original PPM scheme, which we describe in detail. Various tests of the code are presented, including adiabatic expansion, single and multiple pancake formation, and three-dimensional cosmological simulations with initial conditions based on the cold dark matter scenario.

  19. Direct Calculations of Current Drive with a Full Wave Code

    NASA Astrophysics Data System (ADS)

    Wright, John C.; Phillips, Cynthia K.

    1997-11-01

    We have developed a current drive package that evaluates the current driven by fast magnetosonic waves in arbitrary flux geometry. An expression for the quasilinear flux has been derived which accounts for coupling between modes in the spectrum of waves launched from the antenna. The field amplitudes are calculated in the full wave code, FISIC, and the current response function, \chi, also known as the Spitzer function, is determined with Charles Karney's Fokker-Planck code, adj.f. Both codes have been modified to incorporate the same numerical equilibria. To model the effects of a trapped particle population, the bounce averaged equations for current and power are used, and the bounce averaged flux is calculated. The computer model is benchmarked against the homogeneous equations for a high aspect ratio case in which the expected agreement is confirmed. Results from cases for TFTR, NSTX and CDX-U are contrasted with the predictions of the Ehst-Karney parameterization of current drive for circular equilibria. For theoretical background, please see the authors' archive of papers at http://w3.pppl.gov/~jwright/Publications.

  20. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  1. BLSTA: A boundary layer code for stability analysis

    NASA Technical Reports Server (NTRS)

    Wie, Yong-Sun

    1992-01-01

    A computer program is developed to solve the compressible, laminar boundary-layer equations for two-dimensional flow, axisymmetric flow, and quasi-three-dimensional flows, including the flow along the plane of symmetry, flow along the leading-edge attachment line, and swept-wing flows with a conical flow approximation. The finite-difference numerical procedure used to solve the governing equations is second-order accurate. Flows over a wide range of speeds, from subsonic to hypersonic, can be calculated under the perfect-gas assumption. Various wall boundary conditions, such as wall suction or blowing and hot or cold walls, can be applied. The results indicate that this boundary-layer code gives velocity and temperature profiles which are accurate, smooth, and continuous through the first and second normal derivatives. The code presented herein can be coupled with a stability analysis code and used to predict the onset of boundary-layer transition, which enables the assessment of laminar flow control techniques. A user's manual is also included.

  2. The Use of a Pseudo Noise Code for DIAL Lidar

    NASA Astrophysics Data System (ADS)

    Burris, J.; Sun, X.; Abshire, J. B.

    2010-12-01

    Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future spaceborne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range-resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range-resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be >~2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (<0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate on- and offline CO2 wavelengths (~1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
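    The core of the PN-code approach can be sketched generically (arbitrary parameters, not the instrument's): a maximal-length sequence from a linear-feedback shift register modulates the transmitter, and circular cross-correlation of the noisy return with the code recovers the range bin.

```python
import numpy as np

# Illustrative sketch of the PN-code idea (arbitrary parameters, not the
# instrument's): an LFSR generates a maximal-length pseudo-noise sequence,
# the laser is on/off modulated with it, and cross-correlating the received
# signal with the code recovers the target's range bin.
def lfsr_mseq(taps=(7, 6), nbits=7):
    # Fibonacci LFSR with a primitive feedback polynomial -> period 2^n - 1.
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

code = lfsr_mseq()                       # 127-chip m-sequence (0/1 chips)
delay = 37                               # true target range, in chips
received = np.roll(code, delay).astype(float)
received += np.random.default_rng(1).normal(0.0, 0.3, code.size)  # noise
# Circular cross-correlation against the bipolar (+1/-1) reference code.
ref = 2.0 * code - 1.0
corr = np.array([np.dot(received, np.roll(ref, k)) for k in range(code.size)])
estimated_delay = int(np.argmax(corr))
```

    The sharp single correlation peak is what lets many overlapping returns share the channel, which is how the high duty cycle is achieved.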

  3. A framework for control simulations using the TRANSP code

    NASA Astrophysics Data System (ADS)

    Boyer, Mark D.; Andre, Rob; Gates, David; Gerhardt, Stefan; Goumiri, Imene; Menard, Jon

    2014-10-01

    The high-performance operational goals of present-day and future tokamaks will require development of advanced feedback control algorithms. Though reduced models are often used for initial designs, it is important to study the performance of control schemes with integrated models prior to experimental implementation. To this end, a flexible framework for closed loop simulations within the TRANSP code is being developed. The framework exploits many of the predictive capabilities of TRANSP and provides a means for performing control calculations based on user-supplied data (controller matrices, target waveforms, etc.). These calculations, along with the acquisition of "real-time" measurements and manipulation of TRANSP internal variables based on actuator requests, are implemented through a hook that allows custom run-specific code to be inserted into the standard TRANSP source code. As part of the framework, a module has been created to constrain the thermal stored energy in TRANSP using a confinement scaling expression. Progress towards feedback control of the current profile on NSTX-U will be presented to demonstrate the framework. Supported in part by an appointment to the U.S. Department of Energy Fusion Energy Postdoctoral Research Program administered by the Oak Ridge Institute for Science and Education.

  4. Probability of undetected error after decoding for a concatenated coding scheme

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
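    The quantity being bounded can be illustrated for a simple linear block code used purely for detection (a generic textbook formula, not the paper's concatenated bound): an error pattern goes undetected exactly when it equals a nonzero codeword, so on a binary symmetric channel with crossover probability p, P_ud(p) = sum over w >= 1 of A_w p^w (1-p)^(n-w), where A_w is the code's weight distribution.

```python
# Generic illustration (not the paper's concatenated-scheme bound): undetected
# error probability of a linear block code used for detection on a binary
# symmetric channel, P_ud(p) = sum_{w>=1} A_w * p**w * (1-p)**(n-w).
def p_undetected(weight_dist, n, p):
    return sum(a * p**w * (1.0 - p)**(n - w)
               for w, a in weight_dist.items() if w > 0)

# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=A_4=7, A_7=1.
hamming = {0: 1, 3: 7, 4: 7, 7: 1}
pud = p_undetected(hamming, 7, 0.01)
```

    Because the sum starts at the minimum codeword weight, P_ud falls rapidly as the channel improves, which is the behavior the paper's bounds quantify for the concatenated case.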

  5. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  6. RESRAD: A computer code for evaluating radioactively contaminated sites

    SciTech Connect

    Yu, C.; Zielen, A.J.; Cheng, J.J.

    1993-12-31

    This document briefly describes the uses of the RESRAD computer code in calculating site-specific residual radioactive material guidelines and radiation dose-risk to an on-site individual (worker or resident) at a radioactively contaminated site. The adoption by the DOE in order 5400.5, pathway analysis methods, computer requirements, data display, the inclusion of chemical contaminants, benchmarking efforts, and supplemental information sources are all described. (GHH)

  7. Code System For Calculating Reactivity Transients In a LWR.

    1999-03-16

    Version 00 of RETRANS calculates power excursions in light water reactors initiated by reactivity insertions due to withdrawal of control elements. The neutron-physics model is based on the time-dependent two-group neutron diffusion equations. The equation of state of the coolant is approximated by a table built into the code. RETRANS solves the heat conduction equation and calculates the heat transfer coefficient for representative fuel rods at each time step.

  8. A Radiation Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.
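    The final assembly step described above (solve the transformed RTE per g, then integrate over g) can be sketched as follows; the per-g solution here is a made-up stand-in, not output of the NCC-coupled solver:

```python
import numpy as np

# Sketch of the final quadrature step (the per-g solution is a made-up
# stand-in, not the NCC-coupled solver's output): after the transformed RTE
# is solved for each g in [0, 1], Gaussian quadrature assembles the total
# radiant flux.
def total_flux(solution_at_g, order=8):
    nodes, weights = np.polynomial.legendre.leggauss(order)
    g = 0.5 * (nodes + 1.0)                 # map nodes from [-1, 1] to [0, 1]
    return 0.5 * float(np.sum(weights * solution_at_g(g)))

flux = total_flux(lambda g: np.exp(-2.0 * g))   # toy per-g solution
```

    Each quadrature node corresponds to one solve of the transformed RTE, so the quadrature order directly sets the number of RTE solves required.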

  9. FAST_AD Code Verification: A Comparison to ADAMS

    SciTech Connect

    Buhl, M.L.; Wright, A.D.; Pierce, K.G.

    2001-02-15

    The National Renewable Energy Laboratory's National Wind Technology Center (NWTC) has refocused its wind turbine design-code comparison effort to verify FAST_AD with ADAMS. FAST_AD is a wind turbine structural-response code developed by Oregon State University for the NWTC. ADAMS is a commercial, general-purpose, multibody-dynamics code developed by Mechanical Dynamics, Inc. ADAMS, which is used in many industries, has been rigorously tested. Both ADAMS and FAST_AD use the AeroDyn subroutine package for calculating aerodynamic forces. The University of Utah developed AeroDyn for the NWTC. To compare FAST_AD to ADAMS, we modeled a rough approximation of the AWT-27 P4 turbine, using the same properties for both simulators. The AWT-27 is a 275-kilowatt (kW), two-bladed wind turbine. We also created three-bladed versions of the turbine models to verify FAST_AD for three-bladed turbines. In this paper, we list the aerodynamic features used in the comparison. We also explain how the programs model the turbine structure, describe the degrees of freedom (DOFs) used for this study, and present simulation comparisons that show very good agreement.

  10. DANTSYS: A diffusion accelerated neutral particle transport code system

    SciTech Connect

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZ-Theta symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing.
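    The diamond-differencing spatial scheme common to these Solver Modules can be illustrated with a toy one-group, fixed-source 1D slab sweep (no acceleration and no set-to-zero fixup; parameters are arbitrary):

```python
import numpy as np

# Toy 1D slab transport sweep using the standard diamond-difference scheme
# (one group, fixed source, no acceleration or fixup). For an ordinate
# mu > 0 the sweep runs left to right; the cell-average flux is taken as the
# average of the two edge fluxes (the "diamond" relation).
def dd_sweep(mu, sigma_t, q, h, psi_in=0.0):
    psi_avg = np.empty(len(q))
    for i in range(len(q)):
        # Solve mu*(psi_out - psi_in)/h + sigma_t*psi_i = q_i with
        # psi_i = (psi_in + psi_out)/2.
        psi_out = (q[i] + psi_in * (mu / h - 0.5 * sigma_t)) \
                  / (mu / h + 0.5 * sigma_t)
        psi_avg[i] = 0.5 * (psi_in + psi_out)
        psi_in = psi_out
    return psi_avg

# Uniform source in a purely absorbing slab: the angular flux saturates
# toward q/sigma_t deep in the slab.
flux = dd_sweep(mu=0.5, sigma_t=1.0, q=np.ones(50), h=0.1)
```

    When sigma_t*h/(2*mu) exceeds 1, the multiplier on psi_in turns negative and the scheme can produce negative fluxes, which is exactly what the set-to-zero fixup mentioned in the abstract guards against.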

  11. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is considerably more severe than in visible imaging systems, and buries the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of the mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: the coded mask imaging with ideal focused lenses and the imperfection imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF within the field of view. The influence of mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structures, while more periodic structures are needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, the desired diffraction pattern can be achieved.
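    The model's central relation, system PSF = (mask diffraction pattern) convolved with (lens PSF), can be sketched numerically with toy inputs (a random binary mask and a Gaussian lens blur, neither taken from the paper):

```python
import numpy as np

# Numerical sketch of the two-component model: with thermal radiation
# ignored, the system PSF is the convolution of the mask's diffraction
# pattern with the PSF of the practical lens. Both inputs are toy arrays.
def system_psf(mask_pattern, lens_psf):
    n = mask_pattern.shape[0] + lens_psf.shape[0] - 1
    # Linear 2D convolution via zero-padded FFTs.
    conv = np.fft.irfft2(np.fft.rfft2(mask_pattern, (n, n)) *
                         np.fft.rfft2(lens_psf, (n, n)), (n, n))
    return conv / conv.sum()                 # normalize to unit energy

mask = np.random.default_rng(0).integers(0, 2, (16, 16)).astype(float)
y, x = np.mgrid[-3:4, -3:4]
lens = np.exp(-(x**2 + y**2) / 2.0)          # Gaussian lens-blur stand-in
psf = system_psf(mask, lens)
```

    Swapping in a more random or more periodic mask pattern directly changes the structure of the resulting PSF, which is the trade-off the paper's criteria evaluate.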

  12. A Test on a Bilingual Dual Coding Hypothesis in Japanese-English Bilinguals.

    ERIC Educational Resources Information Center

    Taura, Hideyuki

    A study investigated the effects of second language (L2) acquisition age, length of L2 exposure, and gender on bilingual coding, and examined whether the bilingual dual coding effect in incidental recalls would be the same as in Indo-European languages. The bilingual dual coding hypothesis proposes that the individual's image system and the two…

  13. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel. [CONDOR code

    SciTech Connect

    Chen, K.F.; Olson, C.A.

    1983-09-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction.

  14. SULEC: Benchmarking a new ALE finite-element code

    NASA Astrophysics Data System (ADS)

    Buiter, S.; Ellis, S.

    2012-04-01

    We have developed a 2-D/3-D arbitrary Lagrangian-Eulerian (ALE) finite-element code, SULEC, based on known techniques from the literature. SULEC is successful in tackling many of the problems faced by numerical models of lithosphere and mantle processes, such as the combination of viscous, elastic, and plastic rheologies, the presence of a free surface, the contrast in viscosity between the lithosphere and the underlying asthenosphere, and the occurrence of large deformations including viscous flow and offset on shear zones. The aim of our presentation is (1) to describe SULEC, and (2) to present a set of analytical and numerical benchmarks that we use to continuously test our code. SULEC solves the incompressible momentum equation coupled with the energy equation. It uses a structured mesh built of quadrilateral or brick elements that can vary in size in all dimensions, allowing high resolution to be achieved where required. The elements are either linear in velocity with constant pressure, or quadratic in velocity with linear pressure. An accurate pressure field is obtained through an iterative penalty (Uzawa) formulation. Material properties are carried on tracer particles that are advected through the Eulerian mesh. Shear elasticity is implemented following the approach of Moresi et al. [J. Comp. Phys. 184, 2003], brittle materials deform following a Drucker-Prager criterion, and viscous flow is by temperature- and pressure-dependent power-law creep. The top boundary of our models is a true free surface (with free surface stabilisation) on which simple surface-process models may be imposed. We use a set of benchmarks that test viscous, viscoelastic, elastic and plastic deformation, temperature advection and conduction, free surface behaviour, and pressure computation. Part of our benchmark set is automated, allowing easy testing of new code versions. Examples include Poiseuille flow, Couette flow, Stokes flow, relaxation of viscous topography, and viscous pure shear.
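    One benchmark of the kind listed above, plane Poiseuille flow, has the closed-form profile u(y) = G/(2*eta) * y*(h - y) for pressure gradient G, viscosity eta, and channel height h, so any numerical velocity field can be checked against it. The sketch below uses a small finite-difference solve as a stand-in for SULEC output:

```python
import numpy as np

# Poiseuille-flow benchmark sketch: the analytic profile
# u(y) = G/(2*eta) * y*(h - y) is compared against a second-order
# finite-difference solve of eta * u'' = -G (a stand-in for SULEC output).
def poiseuille_analytic(y, G, eta, h):
    return G / (2.0 * eta) * y * (h - y)

def poiseuille_fd(n, G, eta, h):
    y = np.linspace(0.0, h, n)
    dy = y[1] - y[0]
    A = np.zeros((n, n))
    b = np.full(n, -G / eta * dy**2)
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[0, 0] = A[-1, -1] = 1.0               # no-slip walls
    b[0] = b[-1] = 0.0
    return y, np.linalg.solve(A, b)

y, u = poiseuille_fd(41, G=1.0, eta=1.0, h=1.0)
max_err = np.max(np.abs(u - poiseuille_analytic(y, 1.0, 1.0, 1.0)))
```

    Automating such comparisons (assert the error stays below a tolerance) is exactly how a benchmark suite can be rerun against every new code version.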

  15. Exploration of Extreme Mass Ratio Inspirals with a Tree Code

    NASA Astrophysics Data System (ADS)

    Miller, Michael

    Extreme mass ratio inspirals (EMRIs), in which a stellar-mass object spirals into a supermassive black hole, are critical gravitational wave sources for the Laser Interferometer Space Antenna (LISA) because of their potential as precise probes of strong gravity. They are also thought to contribute to the flares observed in a few active galactic nuclei that have been attributed to tidal disruption of stars. There are, however, large uncertainties about the rates and properties of EMRIs. The reason is that their galactic nuclear environments contain millions of stars around a central massive object, and their paths must be integrated with great precision to properly include effects such as secular resonances, which accumulate over many orbits. Progress is being made on all fronts, but current numerical options are either profoundly computationally intensive (direct N-body integrators, which in addition do not currently have the needed long-term accuracy) or require special symmetry or other simplifications that may compromise the realism of the results (Monte Carlo and Fokker-Planck codes). We propose to undertake extensive simulations of EMRIs using tree codes that we have adapted to the problem. Tree codes are much faster than direct N-body simulations, yet they are powerful and flexible enough to include nonideal physics such as triaxiality, arbitrary mass spectra, post-Newtonian corrections, and secular evolutionary effects such as resonant relaxation and Kozai oscillations in the equations of motion. We propose to extend our codes to include these effects and to allow separate tracking of special particles that will represent binaries, thus allowing us to follow their interactions and evolution. In our development we will compare our results for a few tens of thousands of particles with a state-of-the-art direct N-body integrator, to evaluate the accuracy of our code and discern systematic effects. This will allow detailed yet fast examinations of large-N systems.

  16. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  17. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge frequency and phase offset estimation ranges and enhance accuracy of the system greatly, and the bit error rate (BER) performance of the system is improved effectively compared with that of the system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  18. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  19. A LONE code for the sparse control of quantum systems

    NASA Astrophysics Data System (ADS)

    Ciaramella, G.; Borzì, A.

    2016-03-01

    In many applications with quantum spin systems, control functions with a sparse and pulse-shaped structure are often required. These controls can be obtained by solving quantum optimal control problems with L1-penalized cost functionals. In this paper, the MATLAB package LONE is presented, aimed at solving L1-penalized optimal control problems governed by unitary-operator quantum spin models. This package implements a new strategy that includes a globalized semi-smooth Krylov-Newton scheme and a continuation procedure. Results of numerical experiments demonstrate the ability of the LONE code to compute accurate sparse optimal control solutions.

  20. Development of a predictive code for aircrew radiation exposure.

    PubMed

    McCall, M J; Lemay, F; Bean, M R; Lewis, B J; Bennett, L G I

    2009-10-01

    Using the empirical data measured by the Royal Military College with a tissue equivalent proportional counter, a model was derived to allow for the interpolation of the dose rate for any global position, altitude and date. Through integration of the dose-rate function over a great circle flight path or between various waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAire) was further developed to provide an estimate of the total dose equivalent on any route worldwide at any period in the solar cycle.
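
    The route-integration step described above can be sketched as follows. This is a minimal illustration, not PCAire itself: `dose_rate` stands in for the empirical dose-rate model (here a hypothetical function of position only), the route is interpolated along a great circle, and the dose rate is integrated by the trapezoidal rule over the flight time.

```python
import math

def great_circle_points(lat1, lon1, lat2, lon2, n=100):
    """Interpolate n+1 points along the great circle between two waypoints
    (all angles in radians), via spherical linear interpolation of unit vectors."""
    def to_vec(lat, lon):
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))
    a, b = to_vec(lat1, lon1), to_vec(lat2, lon2)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    omega = math.acos(dot)  # angular separation between waypoints
    pts = []
    for i in range(n + 1):
        t = i / n
        if omega == 0.0:
            v = a
        else:
            s1 = math.sin((1 - t) * omega) / math.sin(omega)
            s2 = math.sin(t * omega) / math.sin(omega)
            v = tuple(s1 * x + s2 * y for x, y in zip(a, b))
        lat = math.asin(max(-1.0, min(1.0, v[2])))
        lon = math.atan2(v[1], v[0])
        pts.append((lat, lon))
    return pts

def route_dose(dose_rate, lat1, lon1, lat2, lon2, flight_hours, n=100):
    """Trapezoidal integral of a position-dependent dose rate (e.g. uSv/h)
    over a great-circle route flown at constant speed."""
    pts = great_circle_points(lat1, lon1, lat2, lon2, n)
    rates = [dose_rate(lat, lon) for lat, lon in pts]
    dt = flight_hours / n
    return sum((rates[i] + rates[i + 1]) / 2.0 * dt for i in range(n))
```

    In the real code the dose rate also depends on altitude and date in the solar cycle; those arguments would simply be threaded through `route_dose` in the same way.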

  1. Nyx: A MASSIVELY PARALLEL AMR CODE FOR COMPUTATIONAL COSMOLOGY

    SciTech Connect

    Almgren, Ann S.; Bell, John B.; Lijewski, Mike J.; Lukic, Zarija; Van Andel, Ethan

    2013-03-01

    We present a new N-body and gas dynamics code, called Nyx, for large-scale cosmological simulations. Nyx follows the temporal evolution of a system of discrete dark matter particles gravitationally coupled to an inviscid ideal fluid in an expanding universe. The gas is advanced in an Eulerian framework with block-structured adaptive mesh refinement; a particle-mesh scheme using the same grid hierarchy is used to solve for self-gravity and advance the particles. Computational results demonstrating the validation of Nyx on standard cosmological test problems, and the scaling behavior of Nyx to 50,000 cores, are presented.

  2. Improving the Capabilities of a Continuum Laser Plasma Interaction Code

    SciTech Connect

    Hittinger, J F; Dorr, M R

    2006-06-15

    The numerical simulation of plasmas is a critical tool for inertial confinement fusion (ICF). We have been working to improve the predictive capability of a continuum laser plasma interaction code pF3d, which couples a continuum hydrodynamic model of an unmagnetized plasma to paraxial wave equations modeling the laser light. Advanced numerical techniques such as local mesh refinement, multigrid, and multifluid Godunov methods have been adapted and applied to nonlinear heat conduction and to multifluid plasma models. We describe these algorithms and briefly demonstrate their capabilities.

  3. Radiative Transport for a Smoothed Particle Hydrodynamic Code

    NASA Astrophysics Data System (ADS)

    Lang, Bernd; Kessel-Deynet, Olaf; Burkert, Andreas

    One crude approximation to describe the effect of radiative transport in SPH simulations is to introduce a density-dependent polytropic index in the equation of state (Matthew R. Bate 1998), which is larger than one if the medium becomes optically thick. Doing so, however, fixes the system to one particular density-temperature dependence, whereas in principle the system should be able to realize a variety of density-temperature dependencies if radiative transport is involved and arbitrary heating and cooling functions can be used. We combine the advantages of the SPH code with an algorithm describing flux-limited diffusive radiative transport to develop an RHD code. Flux-limited diffusion involves the Rosseland means of the absorption and scattering coefficients. To calculate these coefficients we use the model from Preibisch et al. 1993. This restricts our simulations to low temperatures (T <= 1000 K) and high densities (ρ >= 10^3 cm^-3) but, on the other hand, keeps the code as simple and as fast as possible. For a given energy-density distribution, the radiation field evolves towards the equilibrium solution on a time-scale much smaller than the typical dynamical time-step for the hydrodynamic equations, so the RT equations have to be solved implicitly. To do this we use the good convergence features of the Successive Over-Relaxation (SOR) method. The focus of the simulations will then be on the prestellar phase, where molecular cloud cores become optically thick. The central temperature is still low (T = 10…500 K), and thus the ionization and dissociation degree is low and nearly constant.
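
    The implicit solve via SOR can be illustrated on a much simpler model problem than the full RHD equations. This is a minimal sketch, assuming a 1D backward-Euler diffusion step with fixed end values, not the code described above:

```python
import numpy as np

def sor_implicit_diffusion(u, dt, dx, D, omega=1.5, tol=1e-10, max_iter=10000):
    """One backward-Euler step of u_t = D u_xx on a 1D grid with fixed
    (Dirichlet) end values, solved by Successive Over-Relaxation (SOR).
    The implicit system is (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = b_i
    with r = D dt / dx**2 and b the field from the previous step."""
    r = D * dt / dx**2
    b = np.array(u, dtype=float)   # right-hand side: old field
    x = np.array(u, dtype=float)   # iterate, initialized with old field
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, len(x) - 1):
            # plain Gauss-Seidel update for unknown i ...
            gauss_seidel = (b[i] + r * (x[i - 1] + x[i + 1])) / (1.0 + 2.0 * r)
            # ... over-relaxed by the factor omega (1 < omega < 2)
            new = (1.0 - omega) * x[i] + omega * gauss_seidel
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change < tol:
            break
    return x
```

    A linear profile is a steady state of the diffusion operator, so the iteration leaves it unchanged; a localized spike is smoothed in a single implicit step regardless of how large `dt` is, which is the point of solving implicitly.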

  4. Analyzing a School Dress Code in a Junior High School: A Set of Exercises.

    ERIC Educational Resources Information Center

    East, Maurice A.; And Others

    Five exercises based on a sample school dress code were designed from a political science perspective to help students develop skills in analyzing issues. The exercises are intended to be used in five or more class periods. In the first exercise, students read a sample dress code and name groups of people who might have opinions about it. In…

  5. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
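
    The shift-register encoding that the cyclic structure allows amounts to polynomial division by the generator g(x) over GF(2). A minimal software sketch of that systematic encoder follows; it is illustrated with the (7,4) Hamming code's generator 1 + x + x^3, not with one of the paper's finite-geometry codes:

```python
def cyclic_encode(msg_bits, gen_poly):
    """Systematic encoding of a cyclic code: append the remainder of
    x^(n-k) * m(x) divided by the generator g(x), which is exactly the
    computation a linear feedback shift register performs in hardware.
    Polynomials are bit lists, lowest degree first."""
    r = len(gen_poly) - 1               # deg g(x) = number of parity bits
    rem = [0] * r + list(msg_bits)      # x^r * m(x), to be reduced mod g(x)
    for i in range(len(rem) - 1, r - 1, -1):  # long division, high degree down
        if rem[i]:
            for j, g in enumerate(gen_poly):
                rem[i - r + j] ^= g
    parity = rem[:r]
    return parity + list(msg_bits)      # codeword: parity bits, then message
```

    Since the codeword equals x^r m(x) plus its remainder mod g(x), it is divisible by g(x) and hence a codeword of the cyclic code.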

  6. Home energy ratings and energy codes -- A marriage that should work

    SciTech Connect

    Verdict, M.E.; Fairey, P.W.; DeWein, M.C.

    1998-07-01

    This paper examines how voluntary home energy rating systems (HERS) can be married to mandatory energy codes to increase code compliance while providing added benefits to consumers, builders, and code officials. Effective code enforcement and compliance is a common problem for state and local jurisdictions attempting to reduce energy consumption and increase housing affordability. Reasons frequently cited for energy code noncompliance are: (1) builder resistance to government regulations and change in building practices; (2) the perceived complexity of the code; (3) a lack of familiarity with energy impacts among code officials and the housing industry; and (4) inadequate government resources for enforcement. By combining ratings and codes, one can create a win-win approach for code officials and energy rating organizations, the housing industry, as well as consumers who wish to reduce air pollution and energy waste. Additionally, state and local government experiences where the marriage between codes and ratings has begun are highlighted and the barriers and benefits assessed.

  7. The Use of a Pseudo Noise Code for DIAL Lidar

    NASA Technical Reports Server (NTRS)

    Burris, John F.

    2010-01-01

    Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future space-borne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be greater than approximately 2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (less than 0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate on- and offline CO2 wavelengths (approximately 1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
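
    The range resolution that PN-code modulation recovers comes from cross-correlating the received photon counts with the transmitted code: each circular shift of the code picks out the return from one range bin. A minimal sketch, using an illustrative 7-chip maximal-length sequence rather than the instrument's actual code or chip rate:

```python
import numpy as np

def pn_range_profile(pn_code, received):
    """Recover a range-resolved return profile by circularly
    cross-correlating the received count sequence with a zero-mean copy
    of the transmitted PN code, so a flat background cancels out."""
    code = np.asarray(pn_code, dtype=float)
    code = code - code.mean()          # remove DC component
    n = len(code)
    received = np.asarray(received, dtype=float)
    # explicit circular correlation; an FFT would be used for long codes
    return np.array([np.dot(received, np.roll(code, k)) for k in range(n)])
```

    For an m-sequence the off-peak circular correlation is flat and small, so a scatterer at a given delay produces a single dominant peak at the corresponding bin of the profile.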

  8. NMACA Approach Used to Build a Secure Message Authentication Code

    NASA Astrophysics Data System (ADS)

    Alosaimy, Raed; Alghathbar, Khaled; Hafez, Alaaeldin M.; Eldefrawy, Mohamed H.

    Secure storage systems should consider the integrity and authentication of long-term stored information. When information is transferred through communication channels, different types of digital information can be represented, such as documents, images, and database tables. The authenticity of such information must be verified, especially when it is transferred through communication channels. Authentication verification techniques are used to verify that the information in an archive is authentic and has not been intentionally or maliciously altered. In addition to detecting malicious attacks, verifying the integrity also identifies data corruption. The purpose of a Message Authentication Code (MAC) is to authenticate messages, where MAC algorithms are keyed hash functions. In most cases, MAC techniques use iterated hash functions, and these techniques are called iterated MACs. Such techniques usually use a MAC key as an input to the compression function, and this key is involved in the compression function, f, at every stage. Modification detection codes (MDCs) are un-keyed hash functions, and are widely used by authentication techniques such as MD4, MD5, SHA-1, and RIPEMD-160. There have been new attacks on hash functions such as MD5 and SHA-1, which require the introduction of more secure hash functions. In this paper, we introduce a new MAC methodology that uses an input MAC key in the compression function to change the order of the message words and the shifting operation in the compression function. The new methodology can be used in conjunction with a wide range of modification detection code techniques. Using the SHA-1 algorithm as a model, a new (SHA-1)-MAC algorithm is presented. The (SHA-1)-MAC algorithm uses the MAC key to build the hash functions by defining the order for accessing source words and defining the number of bit positions for circular left shifts.
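
    For contrast with the keyed compression-function approach described above, the most widely deployed iterated MAC, HMAC, mixes the key into an ordinary hash through fixed inner and outer key pads rather than by reordering message words. Python's standard library provides it directly; this is the conventional construction, not the paper's (SHA-1)-MAC:

```python
import hashlib
import hmac

def make_mac(key: bytes, message: bytes) -> str:
    """HMAC-SHA1 tag as a hex string."""
    return hmac.new(key, message, hashlib.sha1).hexdigest()

def verify(key: bytes, message: bytes, tag_hex: str) -> bool:
    """Recompute the tag; compare in constant time to resist timing attacks."""
    return hmac.compare_digest(make_mac(key, message), tag_hex)
```

    Note that because of the collision attacks on SHA-1 mentioned above, new designs would use HMAC with SHA-256 or stronger; SHA-1 is kept here only to match the paper's setting.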

  9. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.

  10. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
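
    The third-order TVD Runge-Kutta scheme mentioned above is the standard Shu-Osher method; one step can be sketched in a few lines (the spatial operator L would be the WENO discretization, here left abstract):

```python
def tvd_rk3_step(u, dt, rhs):
    """One step of the third-order TVD (strong-stability-preserving)
    Runge-Kutta scheme of Shu & Osher:
        u1    = u + dt*L(u)
        u2    = 3/4*u + 1/4*(u1 + dt*L(u1))
        u_new = 1/3*u + 2/3*(u2 + dt*L(u2))
    Each stage is a convex combination of forward-Euler updates, which
    is what lets the scheme inherit the TVD property of the operator L."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))
```

    The update works unchanged on scalars or on whole solution arrays, since it uses only elementwise arithmetic.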

  11. ELLIPT2D: A Flexible Finite Element Code Written in Python

    SciTech Connect

    Pletzer, A.; Mollis, J.C.

    2001-03-22

    The use of the Python scripting language for scientific applications and in particular to solve partial differential equations is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied, and so are the boundary conditions, which can be of Dirichlet, Neumann or Robin type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include: operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
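
    The dictionary-as-sparse-matrix idea can be sketched in a few lines. This is an illustrative reconstruction of the pattern, not ELLIPT2D's actual class:

```python
class SparseMatrix(dict):
    """Sparse matrix stored as a dictionary keyed by (row, col) tuples:
    absent keys are implicit zeros, so storage scales with the number of
    nonzero entries rather than with n*n."""
    def __init__(self, n):
        super().__init__()
        self.n = n

    def matvec(self, x):
        """Matrix-vector product, touching only the stored nonzeros."""
        y = [0.0] * self.n
        for (i, j), a in self.items():
            y[i] += a * x[j]
        return y

# Assemble a 1D Laplacian stencil entry by entry, the way a finite
# element assembly loop accumulates element contributions.
n = 5
A = SparseMatrix(n)
for i in range(n):
    A[i, i] = A.get((i, i), 0.0) + 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
```

    The `get(..., 0.0)` accumulation is what makes assembly natural: overlapping element contributions simply add into whatever is already stored at that (row, col) key.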

  12. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    SciTech Connect

    Thuc Bui

    2007-12-06

    The program was tasked with implementing time dependent analysis of charged particles into an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions, self magnetic field and implement a more comprehensive post-processing and feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently infeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  13. Torus mapper: a code for dynamical models of galaxies

    NASA Astrophysics Data System (ADS)

    Binney, James; McMillan, Paul J.

    2016-02-01

    We present a freely downloadable software package for modelling the dynamics of galaxies, which we call the Torus Mapper (TM). The package is based around `torus mapping', which is a non-perturbative technique for creating orbital tori for specified values of the action integrals. Given an orbital torus and a star's position at a reference time, one can compute its position at any other time, no matter how remote. One can also compute the velocities with which the star will pass through any given point and the contribution it will make to the time-averaged density there. A system of angle-action coordinates for the given potential can be created by foliating phase space with orbital tori. Such a foliation is facilitated by the ability of TM to create tori by interpolating on a grid of tori. We summarize the advantages of using TM rather than a standard time-stepper to create orbits, and give segments of code that illustrate applications of TM in several contexts, including setting up initial conditions for an N-body simulation. We examine the precision of the orbital tori created by TM and the behaviour of the code when orbits become trapped by a resonance.

  14. A computer code for performance of spur gears

    NASA Technical Reports Server (NTRS)

    Wang, K. L.; Cheng, H. S.

    1983-01-01

    In spur gears, both performance and failure predictions are known to depend strongly on the variation of load, lubricant film thickness, and total flash or contact temperature of the contacting point as it moves along the contact path. The need for an accurate tool for predicting these variables has prompted the development of a computer code based on recent findings in EHL and on finite element methods. The analyses and some typical results are presented to illustrate the effects of gear geometry, velocity, load, lubricant viscosity, and surface convective heat transfer coefficient on the performance of spur gears.

  15. Non-coding RNAs and disease: the classical ncRNAs make a comeback.

    PubMed

    de Almeida, Rogerio Alves; Fraczek, Marcin G; Parker, Steven; Delneri, Daniela; O'Keefe, Raymond T

    2016-08-15

    Many human diseases have been attributed to mutation in the protein coding regions of the human genome. The protein coding portion of the human genome, however, is very small compared with the non-coding portion of the genome. As such, there are a disproportionate number of diseases attributed to the coding compared with the non-coding portion of the genome. It is now clear that the non-coding portion of the genome produces many functional non-coding RNAs, and these RNAs are slowly being linked to human diseases. Here we discuss examples where mutations in classical non-coding RNAs have been attributed to human disease, and identify the future potential of the non-coding portion of the genome in disease biology. PMID:27528754

  16. Stacked codes: Universal fault-tolerant quantum computation in a two-dimensional layout

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Bartlett, Stephen D.

    2016-02-01

    We introduce a class of three-dimensional color codes, which we call stacked codes, together with a fault-tolerant transformation that will map logical qubits encoded in two-dimensional (2D) color codes into stacked codes and back. The stacked code allows for the transversal implementation of a non-Clifford π/8 logical gate, which, when combined with the logical Clifford gates that are transversal in the 2D color code, gives a gate set that is both fault-tolerant and universal without requiring nonstabilizer magic states. We then show that the layers forming the stacked code can be unfolded and arranged in a 2D layout. As only Clifford gates can be implemented transversally for 2D topological stabilizer codes, a nonlocal operation must be incorporated in order to allow for this transversal application of a non-Clifford gate. Our code achieves this operation through the transformation from a 2D color code to the unfolded stacked code induced by measuring only geometrically local stabilizers and gauge operators within the bulk of 2D color codes together with a nonlocal operator that has support on a one-dimensional boundary between such 2D codes. We believe that this proposed method to implement the nonlocal operation is a realistic one for 2D stabilizer layouts and would be beneficial in avoiding the large overheads caused by magic state distillation.

  17. The movement towards a more experimental approach to problem solving in mathematics using coding

    NASA Astrophysics Data System (ADS)

    Barichello, Leonardo

    2016-07-01

    Motivated by a problem proposed in a coding competition for secondary students, I show in this paper how coding substantially changed the problem-solving process towards a more experimental approach.

  18. The Bilingual Personality as a Metasystem--The Case of Code Switching.

    ERIC Educational Resources Information Center

    Titone, Renzoo

    1987-01-01

    Suggests that code-switching is not only a neurolinguistic process but also a psychological phenomenon with substantial reference to personality structure and dynamics. The bilingual's personality as a "code-switcher" is tentatively defined. (Author/CB)

  19. Investigation of a panel code for airframe/propeller integration analyses

    NASA Technical Reports Server (NTRS)

    Miley, S. J.

    1982-01-01

    The Hess panel code was investigated as a procedure to predict the aerodynamic loading associated with propeller slipstream interference on the airframe. The slipstream was modeled as a variable onset flow to the lifting and nonlifting bodies treated by the code. Four sets of experimental data were used for comparisons with the code. The results indicate that the Hess code, in its present form, will give valid solutions for nonuniform onset flows which vary in direction only. The code presently gives incorrect solutions for flows with variations in velocity. Modifications to the code to correct this are discussed.

  20. Is a Genome a Codeword of an Error-Correcting Code?

    PubMed Central

    Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo

    2012-01-01

    Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
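
    The membership test underlying this kind of analysis is a syndrome check against the code's parity-check matrix. A toy sketch follows, using the binary Hamming (7,4) code and a hypothetical one-bit purine/pyrimidine mapping; the paper's actual mapping and code parameters differ:

```python
# Parity-check matrix of the binary Hamming (7,4) code, with column i
# equal to the binary representation of i+1.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

# Assumed (hypothetical) nucleotide-to-bit mapping: purines -> 0, pyrimidines -> 1.
PURINE_MAP = {"A": 0, "G": 0, "C": 1, "T": 1}

def syndrome(bits):
    """Syndrome H*b over GF(2); all zeros means b is a codeword."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def is_codeword(seq7):
    """Check whether a 7-base window maps onto a Hamming (7,4) codeword."""
    bits = [PURINE_MAP[c] for c in seq7]
    return syndrome(bits) == [0, 0, 0]
```

    A nonzero syndrome identifies the (single) bit position that would need to flip, which is exactly the error-correcting property the paper asks whether genomes exploit.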

  1. A Parallel Numerical Micromagnetic Code Using FEniCS

    NASA Astrophysics Data System (ADS)

    Nagy, L.; Williams, W.; Mitchell, L.

    2013-12-01

    Many problems in the geosciences depend on understanding the ability of magnetic minerals to provide stable paleomagnetic recordings. Numerical micromagnetic modelling allows us to calculate the domain structures found in naturally occurring magnetic materials. However, the computational cost rises exceedingly quickly with respect to the size and complexity of the geometries that we wish to model. This problem is compounded by the fact that modern processor design no longer focuses on the speed at which calculations are performed, but rather on the number of computational units amongst which we may distribute our calculations. Consequently, to better exploit modern computational resources our micromagnetic simulations must "go parallel". We present a parallel and scalable micromagnetics code written using FEniCS. FEniCS is a multinational collaboration involving several institutions (University of Cambridge, University of Chicago, The Simula Research Laboratory, etc.) that aims to provide a set of tools for writing scientific software; in particular software that employs the finite element method. The advantages of this approach are the leveraging of pre-existing projects from the world of scientific computing (PETSc, Trilinos, Metis/Parmetis, etc.) and exposing these so that researchers may pose problems in a manner closer to the mathematical language of their domain. Our code provides a scriptable interface (in Python) that allows users to not only run micromagnetic models in parallel, but also to perform pre/post processing of data.

  2. A model for non-monotonic intensity coding

    PubMed Central

    Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V. M.; Yarali, Ayse

    2015-01-01

    Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus–response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666
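
    The excitation-inhibition part of such a motif can be sketched with two monotonic (sigmoidal) inputs whose thresholds differ: subtracting a higher-threshold inhibitory input from an excitatory one yields a response peaked at an intermediate intensity. Parameters here are illustrative, not fitted to the fly data, and the homeostatic-plasticity element is omitted:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def third_layer_response(log_intensity, w_exc=1.0, w_inh=1.0,
                         exc_threshold=0.0, inh_threshold=2.0):
    """Rectified difference of a low-threshold excitatory input and a
    high-threshold inhibitory input. Both inputs grow monotonically with
    stimulus intensity, yet their difference is non-monotonic: near zero
    at low intensity, maximal in between, suppressed at high intensity."""
    exc = sigmoid(log_intensity - exc_threshold)
    inh = sigmoid(log_intensity - inh_threshold)
    return max(0.0, w_exc * exc - w_inh * inh)
```

    Shifting `inh_threshold` moves the preferred intensity, which is the kind of adjustable knob that associative learning of a particular trained intensity would need to tune.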

  3. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  4. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  5. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of business... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or...

  6. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  7. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... ethics. 275.204A-1 Section 275.204A-1 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... codes of ethics. (a) Adoption of code of ethics. If you are an investment adviser registered or required... enforce a written code of ethics that, at a minimum, includes: (1) A standard (or standards) of...

  8. Composing Data Parallel Code for a SPARQL Graph Engine

    SciTech Connect

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste; Haglin, David J.; Feo, John

    2013-09-08

    Big data analytics applications process large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility of performing in-memory processing with large amounts of parallelism. SPARQL is a language for performing queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries into parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces the memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
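
    The graph-matching core of SPARQL evaluation can be sketched as matching triple patterns against a triple store while accumulating variable bindings. The paper's tool emits OpenMP-parallel code for this; the toy sequential Python below, with made-up data, only illustrates the basic-graph-pattern join it parallelizes.

```python
# Toy basic-graph-pattern matcher over RDF-style triples: the sequential
# analogue of the graph matching that the paper's tool compiles to parallel
# OpenMP code. Data and variable names are made up for illustration.

triples = [
    ("alice", "worksAt", "lab"),
    ("bob", "worksAt", "lab"),
    ("alice", "knows", "bob"),
]

def is_var(term):
    return term.startswith("?")

def match(patterns, binding=None):
    """Yield all variable bindings satisfying every triple pattern (a join)."""
    binding = binding or {}
    if not patterns:
        yield dict(binding)
        return
    pat, rest = patterns[0], patterns[1:]
    for triple in triples:
        new = dict(binding)
        ok = True
        for p, t in zip(pat, triple):
            if is_var(p):
                if new.setdefault(p, t) != t:   # conflicting binding
                    ok = False
                    break
            elif p != t:                        # constant mismatch
                ok = False
                break
        if ok:
            yield from match(rest, new)

# SPARQL-like query: who works at the lab and knows someone?
results = list(match([("?x", "worksAt", "lab"), ("?x", "knows", "?y")]))
```

    Each additional pattern acts as a join over the shared variables, which is exactly where the memory and performance costs discussed in the abstract arise.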

  9. FARGO3D: A New GPU-oriented MHD Code

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Pablo; Masset, Frédéric S.

    2016-03-01

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet-disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.
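
    One elementary, dimensionally split building block of such a finite-difference upwind scheme can be sketched in a few lines: a first-order upwind advection update on a periodic 1-D grid. The grid size and CFL number below are illustrative choices, not FARGO3D parameters.

```python
# First-order upwind advection on a periodic 1-D grid: the elementary
# dimensionally split update underlying finite-difference upwind hydro
# schemes. Grid size and CFL number are illustrative, not FARGO3D's.

def upwind_step(u, cfl):
    """One update u_i -= cfl * (u_i - u_{i-1}); requires 0 < cfl <= 1."""
    return [u[i] - cfl * (u[i] - u[i - 1]) for i in range(len(u))]

n = 32
u = [1.0 if 8 <= i < 16 else 0.0 for i in range(n)]   # square pulse
mass0 = sum(u)

for _ in range(5):            # at cfl = 1 the upwind update is an exact shift
    u = upwind_step(u, 1.0)

mass = sum(u)                 # advection conserves the total "mass"
```

    At CFL number 1 the update reduces to an exact one-cell translation per step; at smaller CFL numbers it remains conservative but introduces numerical diffusion.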

  10. Performance of a parallel thermal-hydraulics code TEMPEST

    SciTech Connect

    Fann, G.I.; Trent, D.S.

    1996-11-01

    The authors describe the parallelization of the TEMPEST thermal-hydraulics code. The serial version of this code is used for production-quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative pressure-based and mass-conserved algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST. BiCGStab also solves the coupled thermal energy equation. Scaling performance for three problem sizes (221,220; 358,120; and 701,220 nodes) is presented. These problems were run on two different parallel machines: an IBM-SP and an SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128-processor IBM-SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
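
    The BiCGStab iteration named above can be sketched in plain Python for a small dense system. This is the unpreconditioned, serial textbook form; TEMPEST's production solver adds the diagonal preconditioner and spatial domain decomposition described in the abstract.

```python
# Unpreconditioned BiCGStab for a small non-symmetric system: the serial
# textbook form of the solver kernel the paper parallelizes (TEMPEST adds
# diagonal preconditioning and domain decomposition on top of this).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(A, x):
    return [dot(row, x) for row in A]

def bicgstab(A, b, tol=1e-12, maxiter=200):
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    rhat = r[:]                                # fixed shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(rhat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(rhat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        t = matvec(A, s)
        tt = dot(t, t)
        omega = dot(t, s) / tt if tt else 0.0  # guard against breakdown
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        rho = rho_new
        if dot(r, r) ** 0.5 < tol:
            break
    return x

A = [[4.0, 1.0], [2.0, 3.0]]                   # small non-symmetric test
b = [1.0, 2.0]
x = bicgstab(A, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
```

    In a parallel setting, each `matvec` and `dot` becomes a distributed operation over the decomposed spatial domain, which is where the reported scaling comes from.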

  11. A global fitting code for multichordal neutral beam spectroscopic data

    SciTech Connect

    Seraydarian, R.P.; Burrell, K.H.; Groebner, R.J.

    1992-05-01

    Knowledge of the heat deposition profile is crucial to all transport analysis of beam-heated discharges. The heat deposition profile can be inferred from the fast ion birth profile which, in turn, is directly related to the loss of neutral atoms from the beam. This loss can be measured spectroscopically by the decrease in amplitude of spectral emissions from the beam as it penetrates the plasma. The spectra are complicated by the motional Stark effect, which produces a manifold of nine bright peaks for each of the three beam energy components. A code has been written to analyze this kind of data. In the first phase of this work, spectra from tokamak shots are fit with a Stark splitting and Doppler shift model that ties together the geometry of several spatial positions when they are fit simultaneously. In the second phase, a relative position-to-position intensity calibration will be applied to these results to obtain the spectral amplitudes from which beam atom loss can be estimated. This paper reports on the computer code for the first phase. Sample fits to real tokamak spectral data are shown.

  12. A memristive spiking neuron with firing rate coding

    PubMed Central

    Ignatov, Marina; Ziegler, Martin; Hansen, Mirko; Petraru, Adrian; Kohlstedt, Hermann

    2015-01-01

    Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulation of neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2−x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926. PMID:26539074

  13. A memristive spiking neuron with firing rate coding.

    PubMed

    Ignatov, Marina; Ziegler, Martin; Hansen, Mirko; Petraru, Adrian; Kohlstedt, Hermann

    2015-01-01

    Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulation of neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2-x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926. PMID:26539074

  14. A Network Coding Based Routing Protocol for Underwater Sensor Networks

    PubMed Central

    Wu, Huayang; Chen, Min; Guan, Xin

    2012-01-01

    Due to the particularities of the underwater environment, some negative factors will seriously interfere with data transmission rates, reliability of data communication, communication range, and the network throughput and energy consumption of underwater sensor networks (UWSNs). Thus, full consideration of node energy savings, while maintaining quick, correct and effective data transmission and extending the network life cycle, is essential when routing protocols for underwater sensor networks are studied. In this paper, we have proposed a novel routing algorithm for UWSNs. To increase energy-consumption efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSBR). We designed a probability-balanced mechanism and applied it to TSBR. The theory of network coding is introduced to TSBR to meet the requirement of further reducing node energy consumption and extending network lifetime. Hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed time-slot based balancing routing algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the process of routing construction, balance the energy consumption of each node and effectively prolong the network lifetime. PMID:22666045

  15. A role for non-coding variation in schizophrenia

    PubMed Central

    Roussos, Panos; Mitchell, Amanda C.; Voloudakis, Georgios; Fullard, John F.; Pothula, Venu M.; Tsang, Jonathan; Stahl, Eli A.; Georgakopoulos, Anastasios; Ruderfer, Douglas M.; Charney, Alexander; Okada, Yukinori; Siminovitch, Katherine A.; Worthington, Jane; Padyukov, Leonid; Klareskog, Lars; Gregersen, Peter K.; Plenge, Robert M.; Raychaudhuri, Soumya; Fromer, Menachem; Purcell, Shaun M.; Brennand, Kristen J.; Robakis, Nikolaos K.; Schadt, Eric E.; Akbarian, Schahram; Sklar, Pamela

    2014-01-01

    A large portion of common variant loci associated with genetic risk for schizophrenia reside within non-coding sequence of unknown function. Here, we demonstrate promoter and enhancer enrichment in schizophrenia variants associated with expression quantitative trait loci (eQTL). The enrichment is greater when functional annotations derived from human brain are used relative to peripheral tissues. Regulatory trait concordance analysis ranked genes within schizophrenia genome-wide significant loci for a potential functional role, based on co-localization of a risk SNP, eQTL and regulatory element sequence. We identified potential physical interactions of non-contiguous proximal and distal regulatory elements. This was verified in prefrontal cortex and induced pluripotent stem cell-derived neurons for the L-type calcium channel (CACNA1C) risk locus. Our findings point to a functional link between schizophrenia-associated non-coding SNPs and 3-dimensional genome architecture associated with chromosomal loopings and transcriptional regulation in the brain. PMID:25453756

  16. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  17. System for loading executable code into volatile memory in a downhole tool

    DOEpatents

    Hall, David R.; Bartholomew, David B.; Johnson, Monte L.

    2007-09-25

    A system for loading an executable code into volatile memory in a downhole tool string component comprises a surface control unit comprising executable code. An integrated downhole network comprises data transmission elements in communication with the surface control unit and the volatile memory. The executable code, stored in the surface control unit, is not permanently stored in the downhole tool string component. In a preferred embodiment of the present invention, the downhole tool string component comprises boot memory. In another embodiment, the executable code is an operating system executable code. Preferably, the volatile memory comprises random access memory (RAM). A method for loading executable code to volatile memory in a downhole tool string component comprises sending the code from the surface control unit to a processor in the downhole tool string component over the network. A central processing unit writes the executable code in the volatile memory.

  18. CANTATAdb: A Collection of Plant Long Non-Coding RNAs.

    PubMed

    Szcześniak, Michał W; Rosikiewicz, Wojciech; Makałowska, Izabela

    2016-01-01

    Long non-coding RNAs (lncRNAs) represent a class of potent regulators of gene expression that are found in a wide array of eukaryotes; however, our knowledge about these molecules in plants is still very limited. In particular, a number of model plant species still lack comprehensive data sets of lncRNAs and their annotations, and very little is known about their biological roles. To meet these shortcomings, we created an online database of lncRNAs in 10 model plant species. The lncRNAs were identified computationally using dozens of publicly available RNA sequencing (RNA-Seq) libraries. Expression values, coding potential, sequence alignments as well as other types of data provide annotation for the identified lncRNAs. In order to better characterize them, we investigated their potential roles in splicing modulation and deregulation of microRNA functions. The data are freely available for searching, browsing and downloading from an online database called CANTATAdb (http://cantata.amu.edu.pl, http://yeti.amu.edu.pl/CANTATA/).
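
    One intuition behind computational lncRNA identification is that long non-coding transcripts tend to lack long open reading frames. The sketch below implements that idea as a naive longest-ORF scan (forward strand, three frames only); real pipelines such as the one behind CANTATAdb rely on dedicated coding-potential classifiers, so treat this purely as an illustration.

```python
# Naive coding-potential proxy: length of the longest ATG..stop open reading
# frame, scanned over the three forward frames only. Real lncRNA pipelines
# (including CANTATAdb's) use dedicated coding-potential tools; this sketch
# with made-up sequences only illustrates the underlying intuition.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Longest ATG..stop stretch, in nucleotides, over the three frames."""
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                       # open a candidate ORF
            elif start is not None and codon in STOP_CODONS:
                best = max(best, i + 3 - start) # close it, keep the longest
                start = None
    return best

coding_like = longest_orf("ATGAAATTTGGGTAA")     # one clean 15-nt ORF
noncoding_like = longest_orf("TTTCCTTTCCTTTCC")  # no start codon at all
```

    A transcript whose longest ORF falls below some length threshold would be flagged as a non-coding candidate; the threshold and the reverse-strand handling are deliberately left out of this sketch.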

  19. Gene algebra from a genetic code algebraic structure.

    PubMed

    Sanchez, R; Morgado, E; Grau, R

    2005-10-01

    By considering two important factors involved in the codon-anticodon interactions, the hydrogen bond number and the chemical type of bases, a codon array of the genetic code table as an increasing code scale of interaction energies of amino acids in proteins was obtained. Next, in order to consecutively obtain all codons from the codon AAC, a sum operation has been introduced in the set of codons. The group obtained over the set of codons is isomorphic to the group (Z64, +) of the integers modulo 64. On the Z64-algebra of the set of 64^N codon sequences of length N, gene mutations are described by means of endomorphisms f: (Z64)^N → (Z64)^N. Endomorphisms and automorphisms helped us describe the gene mutation pathways. For instance, 77.7% of mutations in 749 HIV protease gene sequences correspond to unique diagonal endomorphisms of the wild-type strain HXB2. In particular, most of the reported mutations that confer drug resistance to the HIV protease gene correspond to diagonal automorphisms of the wild type. What is more, in the human beta-globin gene a similar situation appears, where most of the single-codon mutations correspond to automorphisms. Hence, in analyses of the molecular evolution process on the DNA sequence set of length N, the Z64-algebra will help us explain the quantitative relationships between genes.
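
    The codon group (Z64, +) can be sketched by encoding each codon as a base-4 number and adding modulo 64. The base ordering below (A, C, G, U) is an assumption made for illustration; the paper derives its ordering from the hydrogen-bond number and chemical type of the bases.

```python
# Sketch of the codon group (Z64, +): encode each codon as a base-4 integer
# and add modulo 64. The base ordering A, C, G, U is an assumed illustrative
# choice; the paper's ordering comes from hydrogen bonds and chemical type.

BASES = "ACGU"

def encode(codon):
    """Codon -> element of Z64 via base-4 positional encoding."""
    return sum(BASES.index(b) * 4 ** (2 - i) for i, b in enumerate(codon))

def decode(n):
    return "".join(BASES[(n // 4 ** (2 - i)) % 4] for i in range(3))

def add(c1, c2):
    """The group operation on codons: addition modulo 64."""
    return decode((encode(c1) + encode(c2)) % 64)

# With this encoding AAC maps to 1, a generator of Z64, so repeatedly adding
# AAC visits every codon exactly once before returning to the start.
codon, seen = "AAA", set()
for _ in range(64):
    seen.add(codon)
    codon = add(codon, "AAC")
```

    This reproduces the abstract's construction of consecutively obtaining all 64 codons from AAC; a mutation then corresponds to adding a fixed group element, i.e. to a translation in Z64.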

  20. FRINK - A Code to Evaluate Space Reactor Transients

    SciTech Connect

    Poston, David I.; Marcille, Thomas F.; Dixon, David D.; Amiri, Benjamin W.

    2007-01-30

    One of the biggest needs for space reactor design and development is detailed system modeling. Most proposed space fission systems are very different from previously operated fission power systems, and extensive testing and modeling will be required to demonstrate integrated system performance. There are also some aspects of space reactors that make them unique from most terrestrial applications, and require different modeling approaches. The Fission Reactor Integrated Nuclear Kinetics (FRINK) code was developed to evaluate simplified space reactor transients (note: the term "space reactor" inherently includes planetary and lunar surface reactors). FRINK is an integrated point-kinetics/thermal-hydraulic transient analysis FORTRAN code; "integrated" refers to the simultaneous solution of the thermal and neutronic equations. In its current state FRINK is a very simple system model, perhaps better referred to as a reactor model. The "system" only extends to the primary loop power removal boundary condition; however, this allows the simulation of simplified transients (e.g. loss of primary heat sink, loss of flow, large reactivity insertion, etc.), which are most important in bounding early system conceptual design. FRINK could then be added to a complete system model later in the design and development process as the system design matures.
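
    The point-kinetics core of such a code can be sketched with a single delayed-neutron group and forward Euler integration. The parameter values below are generic illustrative choices, not FRINK's, and the thermal feedback FRINK couples in is omitted.

```python
# Minimal point-kinetics sketch (one delayed-neutron group, forward Euler)
# of the neutronic half of an integrated model like FRINK. Parameters are
# generic illustrative values, and thermal feedback is omitted.

BETA = 0.0065      # delayed-neutron fraction (assumed)
LAM = 0.08         # precursor decay constant, 1/s (assumed)
GEN = 1.0e-4       # neutron generation time, s (assumed)

def transient(rho, t_end=1.0, dt=1.0e-4):
    """Relative power n(t_end) after a step reactivity insertion rho."""
    n = 1.0
    c = BETA * n / (LAM * GEN)              # steady-state precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - BETA) / GEN) * n + LAM * c
        dc = (BETA / GEN) * n - LAM * c
        n += dt * dn
        c += dt * dc
    return n

power_null = transient(0.0)    # no insertion: power stays at unity
power_up = transient(0.001)    # +100 pcm step: prompt jump, then slow rise
```

    For the small step insertion the power exhibits the expected prompt jump toward beta/(beta - rho) followed by a slow rise on the stable reactor period; bounding transients like the large insertions mentioned in the abstract would need a stiffer integrator than plain Euler.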

  1. CANTATAdb: A Collection of Plant Long Non-Coding RNAs

    PubMed Central

    Szcześniak, Michał W.; Rosikiewicz, Wojciech; Makałowska, Izabela

    2016-01-01

    Long non-coding RNAs (lncRNAs) represent a class of potent regulators of gene expression that are found in a wide array of eukaryotes; however, our knowledge about these molecules in plants is still very limited. In particular, a number of model plant species still lack comprehensive data sets of lncRNAs and their annotations, and very little is known about their biological roles. To meet these shortcomings, we created an online database of lncRNAs in 10 model plant species. The lncRNAs were identified computationally using dozens of publicly available RNA sequencing (RNA-Seq) libraries. Expression values, coding potential, sequence alignments as well as other types of data provide annotation for the identified lncRNAs. In order to better characterize them, we investigated their potential roles in splicing modulation and deregulation of microRNA functions. The data are freely available for searching, browsing and downloading from an online database called CANTATAdb (http://cantata.amu.edu.pl, http://yeti.amu.edu.pl/CANTATA/). PMID:26657895

  2. Verification and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code

    SciTech Connect

    Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A

    2004-12-09

    Verification and Validation (V&V) is a critical phase in the development cycle of any scientific code. The aim of the V&V process is to determine whether or not the code fulfills and complies with the requirements that were defined prior to the start of the development process. While code V&V can take many forms, this paper concentrates on validation of the results obtained from a modern code against those produced by a validated, legacy code. In particular, the neutron transport capabilities of the modern Monte Carlo code MERCURY are validated against those in the legacy Monte Carlo code TART. The results from each code are compared for a series of basic transport and criticality calculations which are designed to check a variety of code modules. These include the definition of the problem geometry, particle tracking, collisional kinematics, sampling of secondary particle distributions, and nuclear data. The metrics that form the basis for comparison of the codes include both integral quantities and particle spectra. The use of integral results, such as eigenvalues obtained from criticality calculations, is shown to be necessary, but not sufficient, for a comprehensive validation of the code. This process has uncovered problems in both the transport code and the nuclear data processing codes which have since been rectified.
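
    The abstract's point that integral results are necessary but not sufficient can be shown with a tiny numerical example: two binned "spectra" (made-up numbers, standing in for TART- and MERCURY-style tallies) that agree exactly in their integral while disagreeing bin by bin.

```python
# Why integral metrics alone cannot validate a transport code: two binned
# spectra (made-up numbers) can have identical integrals yet disagree in
# every shape detail, so spectrum comparisons are needed as well.

legacy_spectrum = [0.10, 0.30, 0.40, 0.15, 0.05]   # e.g. a legacy-code tally
modern_spectrum = [0.20, 0.20, 0.40, 0.10, 0.10]   # e.g. a modern-code tally

integral_gap = abs(sum(legacy_spectrum) - sum(modern_spectrum))
max_bin_gap = max(abs(a - b)
                  for a, b in zip(legacy_spectrum, modern_spectrum))
```

    An eigenvalue or other integral quantity built from either spectrum could agree to round-off while the bin-by-bin comparison immediately flags a modeling difference, which is why the V&V suite compares both.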

  3. ROAR: A 3-D tethered rocket simulation code

    SciTech Connect

    York, A.R. II; Ludwigsen, J.S.

    1992-04-01

    A high-velocity impact testing technique, utilizing a tethered rocket, is being developed at Sandia National Laboratories. The technique involves tethering a rocket assembly to a pivot location and flying it in a semicircular trajectory to deliver the rocket and payload to an impact target location. Integral to developing this testing technique is the parallel development of accurate simulation models. An operational computer code, called ROAR (Rocket-on-a-Rope), has been developed to simulate the three-dimensional transient dynamic behavior of the tether and motor/payload assembly. This report presents a discussion of the parameters modeled, the governing set of equations, the through-time integration scheme, and the input required to set up a model. Also included is a sample problem and a comparison with experimental results.

  4. Development of depletion perturbation theory for a reactor nodal code

    SciTech Connect

    Bowman, S.M.

    1981-09-01

    A generalized depletion perturbation (DPT) theory formulation for light water reactor (LWR) depletion problems is developed and implemented into the three-dimensional LWR nodal code SIMULATE. This development applies the principles of the original derivation by M.L. Williams to the nodal equations solved by SIMULATE. The present formulation is first described in detail, and the nodal coupling methodology in SIMULATE is used to determine partial derivatives of the coupling coefficients. The modifications to the original code and the new DPT options available to the user are discussed. Finally, the accuracy and the applicability of the new DPT capability to LWR design analysis are examined for several LWR depletion test cases. The cases range from simple static cases to a realistic PWR model for an entire fuel cycle. Responses of interest included k-eff, nodal peaking, and peak nodal exposure. The nonlinear behavior of responses with respect to perturbations of the various types of cross sections was also investigated. The time-dependence of the sensitivity coefficients for different responses was examined and compared. Comparison of DPT results for these examples to direct calculations reveals the limited applicability of depletion perturbation theory to LWR design calculations at the present. The reasons for these restrictions are discussed, and several methods which might improve the computational accuracy of DPT are proposed for future research.
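
    The basic idea behind perturbation theory for an eigenvalue response, predicting the change from the unperturbed solution rather than re-running the calculation, can be sketched on a 2x2 toy operator. The matrix is symmetric, so the adjoint eigenvector equals the forward one; all numbers are illustrative and unrelated to SIMULATE.

```python
# First-order perturbation estimate of a dominant-eigenvalue shift: the
# core idea behind eigenvalue perturbation theory, on a symmetric 2x2 toy
# operator (so the adjoint eigenvector equals the forward one). All numbers
# are illustrative.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dominant_eig(A, iters=500):
    """Power iteration: dominant eigenvalue and unit eigenvector."""
    x = [1.0, 1.0]
    for _ in range(iters):
        y = matvec(A, x)
        norm = (y[0] ** 2 + y[1] ** 2) ** 0.5
        x = [yi / norm for yi in y]
    lam = sum(xi * yi for xi, yi in zip(x, matvec(A, x)))
    return lam, x

A = [[2.0, 1.0], [1.0, 3.0]]
dA = [[0.01, 0.0], [0.0, 0.0]]        # small perturbation of one coefficient

lam0, phi = dominant_eig(A)
predicted = sum(p * q for p, q in zip(phi, matvec(dA, phi)))  # <phi, dA phi>

A_pert = [[a + d for a, d in zip(ra, rd)] for ra, rd in zip(A, dA)]
lam1, _ = dominant_eig(A_pert)
actual = lam1 - lam0                  # direct recalculation, for comparison
```

    The first-order estimate agrees with the direct recalculation to second order in the perturbation; DPT extends this idea to time-dependent depletion problems, where the comparisons in the abstract show the linear estimate eventually breaks down.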

  5. Chemotopic Odorant Coding in a Mammalian Olfactory System

    PubMed Central

    Johnson, Brett A.; Leon, Michael

    2008-01-01

    Systematic mapping studies involving 365 odorant chemicals have shown that glomerular responses in the rat olfactory bulb are organized spatially in patterns that are related to the chemistry of the odorant stimuli. This organization involves the spatial clustering of principal responses to numerous odorants that share key aspects of chemistry such as functional groups, hydrocarbon structural elements, and/or overall molecular properties related to water solubility. In several of the clusters, responses shift progressively in position according to odorant carbon chain length. These response domains appear to be constructed from orderly projections of sensory neurons in the olfactory epithelium and may also involve chromatography across the nasal mucosa. The spatial clustering of glomerular responses may serve to “tune” the principal responses of bulbar projection neurons by way of inhibitory interneuronal networks, allowing the projection neurons to respond to a narrower range of stimuli than their associated sensory neurons. When glomerular activity patterns are viewed relative to the overall level of glomerular activation, the patterns accurately predict the perception of odor quality, thereby supporting the notion that spatial patterns of activity are the key factors underlying that aspect of the olfactory code. A critical analysis suggests that alternative coding mechanisms for odor quality, such as those based on temporal patterns of responses, enjoy little experimental support. PMID:17480025

  6. Equilibrium and stability code for a diffuse plasma.

    PubMed

    Betancourt, O; Garabedian, P

    1976-04-01

    A computer code to investigate the equilibrium and stability of a diffuse plasma in three dimensions is described that generalizes earlier work on a sharp free boundary model. Toroidal equilibria of a plasma are determined by considering paths of steepest descent associated with a new version of the variational principle of magnetohydrodynamics that involves mapping a fixed coordinate domain onto the plasma. A discrete approximation of the potential energy is written down following the finite element method, and the resulting expression is minimized with respect to the values of the mapping at points of a rectangular grid. If a relative minimum of the discrete analogue of the energy is attained, the corresponding equilibrium is considered to be stable.
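
    The steepest-descent strategy described above can be sketched on a tiny quadratic stand-in for the discretized energy: descend along the negative gradient until a minimum is reached, and treat attainment of the minimum as the stability criterion. The matrix and step size are made-up illustrative values, not the magnetohydrodynamic functional.

```python
# Steepest descent to a stable equilibrium, sketched on a tiny quadratic
# energy E(u) = u^T A u / 2 - b^T u (made-up coefficients standing in for
# the discretized magnetohydrodynamic potential energy).

def gradient(A, b, u):
    """grad E = A u - b."""
    return [sum(a * x for a, x in zip(row, u)) - bi
            for row, bi in zip(A, b)]

def energy(A, b, u):
    au = [sum(a * x for a, x in zip(row, u)) for row in A]
    return (0.5 * sum(x * y for x, y in zip(u, au))
            - sum(x * y for x, y in zip(b, u)))

A = [[3.0, 1.0], [1.0, 2.0]]   # positive definite: the minimum is stable
b = [1.0, 1.0]
u = [0.0, 0.0]
step = 0.2                      # descent step size (assumed)

energies = [energy(A, b, u)]
for _ in range(200):
    u = [x - step * g for x, g in zip(u, gradient(A, b, u))]
    energies.append(energy(A, b, u))

residual = max(abs(g) for g in gradient(A, b, u))
```

    Reaching a relative minimum (vanishing gradient, decreased energy) is exactly the condition the abstract uses to declare the corresponding equilibrium stable.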

  7. Equilibrium and stability code for a diffuse plasma

    PubMed Central

    Betancourt, Octavio; Garabedian, Paul

    1976-01-01

    A computer code to investigate the equilibrium and stability of a diffuse plasma in three dimensions is described that generalizes earlier work on a sharp free boundary model. Toroidal equilibria of a plasma are determined by considering paths of steepest descent associated with a new version of the variational principle of magnetohydrodynamics that involves mapping a fixed coordinate domain onto the plasma. A discrete approximation of the potential energy is written down following the finite element method, and the resulting expression is minimized with respect to the values of the mapping at points of a rectangular grid. If a relative minimum of the discrete analogue of the energy is attained, the corresponding equilibrium is considered to be stable. PMID:16592310

  8. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  9. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  10. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service 1 2012-07-01 2012-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  11. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service 1 2013-07-01 2013-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  12. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Code of Ethics For Government Service A Appendix A.... A Appendix A to Part 3000—Code of Ethics For Government Service Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  13. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  14. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 11 2011-10-01 2011-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  15. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  16. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  17. 50 CFR Table 3a to Part 680 - Crab Delivery Condition Codes

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Crab Delivery Condition Codes 3a Table 3a... ZONE OFF ALASKA Pt. 680, Table 3a Table 3a to Part 680—Crab Delivery Condition Codes Code Description 01 Whole crab, live. 79 Deadloss....

  18. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (using the Hamming metric) is derived from the Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of the Krawtchouk polynomials and Q-polynomials used in the rigorous proofs.
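    The Krawtchouk polynomials reviewed in the report's appendices are the kernel of the Delsarte-MacWilliams linear programming bound. A minimal sketch of their definition and the binomial-weight orthogonality that the LP setup relies on (standard textbook formulas, not code from the report):

```python
from math import comb

def krawtchouk(k, x, n):
    """Binary Krawtchouk polynomial K_k(x; n), the kernel of the
    Delsarte-MacWilliams (linear programming) inequalities."""
    # math.comb(a, b) returns 0 when b > a, which matches the convention here
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

# Orthogonality with binomial weights, used when setting up the LP bound:
#   sum_x C(n,x) K_r(x) K_s(x) = 2^n C(n,r) delta_{rs}
n = 6
cross = sum(comb(n, x) * krawtchouk(2, x, n) * krawtchouk(3, x, n) for x in range(n + 1))
norm = sum(comb(n, x) * krawtchouk(2, x, n) ** 2 for x in range(n + 1))
```

    As a sanity check, K_1(x; n) = n - 2x, the cross term vanishes, and the diagonal term equals 2^n C(n, r).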

  19. Reasoning with Computer Code: a new Mathematical Logic

    NASA Astrophysics Data System (ADS)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  20. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  1. Implementation of Hadamard spectroscopy using MOEMS as a coded aperture

    NASA Astrophysics Data System (ADS)

    Vasile, T.; Damian, V.; Coltuc, D.; Garoi, F.; Udrea, C.

    2015-02-01

    Although modern spectrometers have reached a high level of performance, output signals are often weak, and traditional slit spectrometers still confront the problem of poor optical throughput, which limits their efficiency in low-light conditions. To overcome these issues, Hadamard spectroscopy (HS) was implemented in a conventional Ebert-Fastie spectrometer setup by substituting the exit slit with a digital micro-mirror device (DMD), which acts as a coded aperture. The theory behind HS and the functionality of the DMD are presented. The improvements brought by HS are demonstrated by means of a spectrometric experiment, in which a higher-SNR spectrum is acquired. Comparative experiments were conducted to quantify the SNR difference between HS and the scanning-slit method; the results show an SNR gain of 3.35 in favor of HS. One can conclude that the HS method is a valuable asset for low-light spectrometric experiments.
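    The throughput advantage of HS comes from measuring many slits at once through a Hadamard S-matrix mask and then inverting the multiplexed readings. A toy sketch of that encode/decode cycle (a generic S-matrix built from a Sylvester Hadamard matrix, with made-up intensity values; not the paper's DMD setup):

```python
from fractions import Fraction

def sylvester(n):
    # Sylvester-type Hadamard matrix of order n (n a power of 2)
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

def s_matrix(m):
    # Order-m S-matrix from the Hadamard matrix of order m+1:
    # drop the first row/column, map -1 -> 1 (open slit), +1 -> 0 (closed)
    H = sylvester(m + 1)
    return [[1 if H[i][j] == -1 else 0 for j in range(1, m + 1)]
            for i in range(1, m + 1)]

m = 7
S = s_matrix(m)
x = [3, 1, 4, 1, 5, 9, 2]          # hypothetical spectral intensities
# each multiplexed reading sums the intensities of all open slits
y = [sum(S[i][j] * x[j] for j in range(m)) for i in range(m)]
# closed-form inverse for S-matrices: S^-1 = 2/(m+1) * (2 S^T - J)
xhat = [sum(Fraction(2, m + 1) * (2 * S[j][i] - 1) * y[j] for j in range(m))
        for i in range(m)]
```

    Each row of the mask keeps (m+1)/2 slits open, and the exact inversion recovers the original intensities; the SNR gain of the method comes from every reading collecting light from half the slits instead of one.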

  2. Domain decomposition methods for a parallel Monte Carlo transport code

    SciTech Connect

    Alme, H J; Rodrigue, G H; Zimmerman, G B

    1999-01-27

    Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.

  3. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed method has excellent error-correction performance and is well suited to optical transmission systems.
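    The combining step at the heart of such constructions is the Chinese remainder theorem itself: shift exponents defined modulo coprime circulant sizes are lifted to a single exponent modulo their product. A minimal, generic CRT sketch (illustrative moduli only, not the paper's code parameters):

```python
def crt(residues, moduli):
    """Chinese remainder theorem: find x with x = r_i (mod m_i)
    for pairwise-coprime moduli m_i."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M, M

# e.g. combining circulant-shift exponents from two base codes with
# coprime circulant sizes 5 and 7 into one exponent modulo 35
x, M = crt([3, 4], [5, 7])
```

    Here x = 18 is the unique value below 35 satisfying both congruences, which is how the construction enlarges the circulant size (and hence the code length) without disturbing the girth-defining shift structure.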

  4. Nonbinary Quantum Convolutional Codes Derived from Negacyclic Codes

    NASA Astrophysics Data System (ADS)

    Chen, Jianzhang; Li, Jianping; Yang, Fan; Huang, Yuanyuan

    2015-01-01

    In this paper, some families of nonbinary quantum convolutional codes are constructed by using negacyclic codes. These nonbinary quantum convolutional codes are different from quantum convolutional codes in the literature. Moreover, we construct a family of optimal quantum convolutional codes.

  5. A Coding System for the Study of Linguistic Variation in Black English.

    ERIC Educational Resources Information Center

    Pfaff, Carol W.

    This paper documents a coding system developed to facilitate the investigation of linguistic variation in Black English. The rationale for employment of such a system is given. The use of the coding system in a study of child Black English is described and the codes for 41 phonological and syntactic variables investigated in the study are…

  6. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary. Program title: FLY 3.1. Catalogue identifier: ADSC_v2_0. Licensing provisions: yes. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. No. of lines in distributed program, including test data, etc.: 158 172. No. of bytes in distributed program, including test data, etc.: 4 719 953. Distribution format: tar.gz. Programming language: Fortran 90, C. Computer: Beowulf cluster, PC, MPP systems. Operating system: Linux, Aix. RAM: 100M words. Catalogue identifier of previous version: ADSC_v1_0. Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159. Does the new version supersede the previous version?: yes. Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed with the MPICH2 library on a PC Linux cluster. FLY's current performance places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block

  7. Regulations and Ethical Considerations for Astronomy Education Research III: A Suggested Code of Ethics

    NASA Astrophysics Data System (ADS)

    Brogt, Erik; Foster, Tom; Dokter, Erin; Buxner, Sanlyn; Antonellis, Jessie

    We present an argument for, and suggested implementation of, a code of ethics for the astronomy education research community. This code of ethics is based on legal and ethical considerations set forth by U.S. federal regulations and the existing code of conduct of the American Educational Research Association. We also provide a fictitious research study as an example for working through the suggested code of ethics.

  8. A User's Guide to the PLTEMP/ANL Code

    SciTech Connect

    Olson, Arne P.; Kalimullah, M.

    2015-07-07

    PLTEMP/ANL V4.2 is a FORTRAN program that obtains a steady-state flow and temperature solution for a nuclear reactor core, or for a single fuel assembly. It is based on an evolutionary sequence of ''PLTEMP'' codes in use at ANL for the past 20 years. Fueled and non-fueled regions are modeled. Each fuel assembly consists of one or more plates or tubes separated by coolant channels. The fuel plates may have one to five layers of different materials, each with heat generation. The width of a fuel plate may be divided into multiple longitudinal stripes, each with its own axial power shape. The temperature solution is effectively 2-dimensional. It begins with a one-dimensional solution across all coolant channels and fuel plates/tubes within a given fuel assembly, at the entrance to the assembly. The temperature solution is repeated for each axial node along the length of the fuel assembly. The geometry may be either slab or radial, corresponding to fuel assemblies made of a series of flat (or slightly curved) plates, or of nested tubes. A variety of thermal-hydraulic correlations are available with which to determine safety margins such as onset of nucleate boiling (ONB), departure from nucleate boiling (DNB), and onset of flow instability (FI). Coolant properties for either light or heavy water are obtained from FORTRAN functions rather than from tables. The code is intended for thermal-hydraulic analysis of research reactor performance in the sub-cooled boiling regime. Both turbulent and laminar flow regimes can be modeled. Options to calculate both forced flow and natural circulation are available. A general search capability is available (Appendix XII) to greatly reduce the reactor analyst's time.
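    The axial marching idea described above — repeat a one-dimensional solution node by node along the flow direction — can be illustrated with a toy single-channel coolant energy balance (assumed flow rate, power shape, and properties; not the PLTEMP/ANL model):

```python
import math

# Toy axial march: coolant temperature in one channel, advanced node by
# node from an assumed chopped-sine axial power shape.
mdot, cp, t_in = 0.25, 4180.0, 40.0    # kg/s, J/(kg K), deg C (assumed values)
nz = 10                                # number of axial nodes
power = [1000.0 * math.sin(math.pi * (i + 0.5) / nz) for i in range(nz)]  # W per node

temps = [t_in]
for q in power:
    # steady-state energy balance per axial node: dT = q / (mdot * cp)
    temps.append(temps[-1] + q / (mdot * cp))
```

    The final temperature rise equals the total channel power divided by mdot*cp, which is the invariant any such marching scheme must preserve regardless of the power shape.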

  9. A user's guide to the PLTEMP/ANL code.

    SciTech Connect

    Kalimullah, M.

    2011-07-05

    PLTEMP/ANL V4.1 is a FORTRAN program that obtains a steady-state flow and temperature solution for a nuclear reactor core, or for a single fuel assembly. It is based on an evolutionary sequence of ''PLTEMP'' codes in use at ANL for the past 20 years. Fueled and non-fueled regions are modeled. Each fuel assembly consists of one or more plates or tubes separated by coolant channels. The fuel plates may have one to five layers of different materials, each with heat generation. The width of a fuel plate may be divided into multiple longitudinal stripes, each with its own axial power shape. The temperature solution is effectively 2-dimensional. It begins with a one-dimensional solution across all coolant channels and fuel plates/tubes within a given fuel assembly, at the entrance to the assembly. The temperature solution is repeated for each axial node along the length of the fuel assembly. The geometry may be either slab or radial, corresponding to fuel assemblies made of a series of flat (or slightly curved) plates, or of nested tubes. A variety of thermal-hydraulic correlations are available with which to determine safety margins such as Onset-of-Nucleate boiling (ONB), departure from nucleate boiling (DNB), and onset of flow instability (FI). Coolant properties for either light or heavy water are obtained from FORTRAN functions rather than from tables. The code is intended for thermal-hydraulic analysis of research reactor performance in the sub-cooled boiling regime. Both turbulent and laminar flow regimes can be modeled. Options to calculate both forced flow and natural circulation are available. A general search capability is available (Appendix XII) to greatly reduce the reactor analyst's time.

  10. Error threshold for the surface code in a superohmic environment

    NASA Astrophysics Data System (ADS)

    Lopez-Delgado, Daniel A.; Novais, E.; Mucciolo, Eduardo R.; Caldeira, Amir O.

    Using the Keldysh formalism, we study the fidelity of a quantum memory over multiple quantum error correction cycles when the physical qubits interact with a bosonic bath at zero temperature. For encoding, we employ the surface code, which has one of the highest error thresholds in the case of stochastic and uncorrelated errors. The time evolution of the fidelity of the resulting two-dimensional system is cast into a statistical mechanics phase transition problem on a three-dimensional spin lattice, and the error threshold is determined by the critical temperature of the spin model. For superohmic baths, we find that time does not affect the error threshold: its value is the same for one or an arbitrary number of quantum error correction cycles. Financial support: FAPESP and CNPq (Brazil).

  11. Developing a code of ethics for human cloning.

    PubMed

    Collmann, J; Graber, G

    2000-01-01

    Under what conditions might the cloning of human beings constitute an ethical practice? A tendency exists to analyze human cloning merely as a technical procedure. As with all revolutionary technological developments, however, human cloning potentially exists in a broad social context that will both shape and be shaped by the biological techniques. Although human cloning must be subjected to technical analysis that addresses fundamental ethical questions such as its safety and efficacy, questions exist that focus our attention on broader issues. Asserting that cloning inevitably leads to undesirable consequences commits the fallacy of technological determinism and untenably separates technological and ethical evaluation. Drawing from the Report of the National Bioethics Advisory Committee and Aldous Huxley's Brave New World, we offer a draft "Code of Ethics for Human Cloning" in order to stimulate discussion about the ethics of the broader ramifications of human cloning as well as its particular technological properties.

  12. A hippocampal network for spatial coding during immobility and sleep

    PubMed Central

    Kay, K.; Sosa, M.; Chung, J.E.; Karlsson, M.P.; Larkin, M.C.; Frank, L.M.

    2016-01-01

    How does an animal know where it is when it stops moving? Hippocampal place cells fire at discrete locations as subjects traverse space, thereby providing an explicit neural code for current location during locomotion. In contrast, during awake immobility, the hippocampus is thought to be dominated by neural firing representing past and possible future experience. The question of whether and how the hippocampus constructs a representation of current location in the absence of locomotion has stood unresolved. Here we report that a distinct population of hippocampal neurons, located in the CA2 subregion, signals current location during immobility, and furthermore does so in association with a previously unidentified hippocampus-wide network pattern. In addition, signaling of location persists into brief periods of desynchronization prevalent in slow-wave sleep. The hippocampus thus generates a distinct representation of current location during immobility, pointing to mnemonic processing specific to experience occurring in the absence of locomotion. PMID:26934224

  13. A user's manual for the Loaded Microstrip Antenna Code (LMAC)

    NASA Technical Reports Server (NTRS)

    Forrai, D. P.; Newman, E. H.

    1988-01-01

    The use of the Loaded Microstrip Antenna Code is described. The geometry of this antenna is shown and its dimensions are described in terms of the program outputs. The READ statements for the inputs are detailed and typical values are given where applicable. The inputs of four example problems are displayed with the corresponding output of the code given in the appendices.

  14. Plaspp: A New X-Ray Postprocessing Capability for ASCI Codes

    SciTech Connect

    Pollak, Gregory

    2003-09-01

    This report announces the availability of the beta version of a (partly) new code, Plaspp (Plasma Postprocessor). This code postprocesses (graphics) dumps from at least two ASCI code suites: Crestone Project and Shavano Project. The basic structure of the code follows that of TDG, the equivalent postprocessor code for LASNEX. In addition to some new commands, the basic differences between TDG and Plaspp are the following: Plaspp uses a graphics dump instead of the unique TDG dump, it handles the unstructured meshes that the ASCI codes produce, and it can use its own multigroup opacity data. Because of the dump format, this code should be usable by any code that produces Cartesian, cylindrical, or spherical graphics formats. This report details the new commands; the required information to be placed on the dumps; some new commands and edits that are applicable to TDG as well, but have not been documented elsewhere; and general information about execution on the open and secure networks.

  15. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    SciTech Connect

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed.
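    The interplay of container failure times and waste-form leaching can be sketched with a toy first-order release model (hypothetical failure times, inventory, and leach rate; not the DUST formulation):

```python
import math

def release_rate(t, failure_times, inventory, lam):
    """Toy source term: each container releases nothing until it fails at
    time tf, then leaches its inventory at first-order rate lam."""
    rate = 0.0
    for tf in failure_times:
        if t >= tf:
            rate += lam * inventory * math.exp(-lam * (t - tf))
    return rate

# total release rate over time for three containers failing at different times
rates = [release_rate(t, [1.0, 5.0, 10.0], 1.0, 0.3) for t in range(0, 20)]
```

    Each container failure adds a fresh exponentially decaying pulse to the total rate, which is the qualitative behavior a multi-failure-time source-term code has to capture.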

  16. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    SciTech Connect

    Sullivan, T.M.

    1992-04-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed.

  17. A Watermarking Scheme for High Efficiency Video Coding (HEVC)

    PubMed Central

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance as compared to its predecessor, i.e. H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has a high potential of utilization in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate. PMID:25144455
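    The embed-then-extract cycle on quantized coefficients can be illustrated with a generic parity (LSB-style) sketch on a list of integers; this illustrates coefficient-domain embedding in general, not the paper's actual HEVC algorithm:

```python
def embed_bits(qtcs, bits):
    """Toy sketch: force the parity of nonzero quantized transform
    coefficients to carry watermark bits (LSB-style embedding)."""
    out, k = list(qtcs), 0
    for i, c in enumerate(out):
        if c != 0 and k < len(bits):
            if (abs(c) & 1) != bits[k]:
                out[i] = c + (1 if c > 0 else -1)   # nudge magnitude by 1 to fix parity
            k += 1
    return out

def extract_bits(qtcs, n):
    # read the parity of the first n nonzero coefficients
    return [abs(c) & 1 for c in qtcs if c != 0][:n]

coeffs = [12, 0, -7, 3, 0, 5, -2]   # made-up quantized coefficients
marked = embed_bits(coeffs, [1, 0, 1, 1])
```

    Only nonzero coefficients are touched, so zero runs (which dominate the bitrate) are preserved; each embedded bit perturbs a magnitude by at most 1, which is the usual argument for low quality impact.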

  18. The Tubulin Code: A Navigation System for Chromosomes during Mitosis.

    PubMed

    Barisic, Marin; Maiato, Helder

    2016-10-01

    Before chromosomes segregate during mitosis in metazoans, they align at the cell equator by a process known as chromosome congression. This is in part mediated by the coordinated activities of kinetochore motors with opposite directional preferences that transport peripheral chromosomes along distinct spindle microtubule populations. Because spindle microtubules are all made from the same α/β-tubulin heterodimers, a critical longstanding question has been how chromosomes are guided to specific locations during mitosis. This implies the existence of spatial cues/signals on specific spindle microtubules that are read by kinetochore motors on chromosomes and ultimately indicate the way towards the equator. Here, we discuss the emerging concept that tubulin post-translational modifications (PTMs), as part of the so-called tubulin code, work as a navigation system for kinetochore-based chromosome motility during early mitosis.

  19. A watermarking scheme for High Efficiency Video Coding (HEVC).

    PubMed

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance as compared to its predecessor, i.e. H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has a high potential of utilization in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate.

  20. Heparan sulfate proteoglycans: a sugar code for vertebrate development?

    PubMed

    Poulain, Fabienne E; Yost, H Joseph

    2015-10-15

    Heparan sulfate proteoglycans (HSPGs) have long been implicated in a wide range of cell-cell signaling and cell-matrix interactions, both in vitro and in vivo in invertebrate models. Although many of the genes that encode HSPG core proteins and the biosynthetic enzymes that generate and modify HSPG sugar chains have not yet been analyzed by genetics in vertebrates, recent studies have shown that HSPGs do indeed mediate a wide range of functions in early vertebrate development, for example during left-right patterning and in cardiovascular and neural development. Here, we provide a comprehensive overview of the various roles of HSPGs in these systems and explore the concept of an instructive heparan sulfate sugar code for modulating vertebrate development. PMID:26487777

  1. Visualization of elastic wavefields computed with a finite difference code

    SciTech Connect

    Larsen, S.; Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent and then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
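    The snapshot-per-interval idea is easy to prototype: a minimal 1-D acoustic (scalar wave) finite-difference loop that collects frames a viewer could display (assumed grid, CFL number, and impulsive source; not the authors' 2-D elastic code):

```python
# Minimal 1-D second-order finite-difference wave propagation with snapshots.
nx, nt = 200, 300
c, dx, dt = 1.0, 1.0, 0.5          # CFL number c*dt/dx = 0.5 < 1, so stable

u_prev = [0.0] * nx
u = [0.0] * nx
u[nx // 2] = 1.0                   # impulsive "explosive" source at the center

snapshots = []
for step in range(nt):
    u_next = [0.0] * nx            # fixed (rigid) boundaries at both ends
    for i in range(1, nx - 1):
        # standard leapfrog update: u_tt = c^2 u_xx
        u_next[i] = (2 * u[i] - u_prev[i]
                     + (c * dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next
    if step % 50 == 0:
        snapshots.append(list(u))  # one frame every 50 steps for the viewer
```

    Rendering each element of `snapshots` in turn gives exactly the kind of evolving-wavefield animation the X Windows interface provides for the 2-D elastic case.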

  2. Stability codes for a liquid rocket implemented for use on a PC

    NASA Technical Reports Server (NTRS)

    Armstrong, Wilbur; Doane, George C., III; Dean, Garvin

    1992-01-01

    The high frequency code has been made interactive using FORTRAN 5.0. The option to plot n-tau curves was added using the graphics routines of FORTRAN 5.0 and GRAFMATIC. The user can now run with input values that are either non-dimensional (as in the original code) or dimensional, and input data may be modified from the keyboard. The low and intermediate frequency codes have been run through a set of variations. This will help the user understand how the stability of a configuration changes if any of the input data change.

  3. Analysis of a two-dimensional type 6 shock-interference pattern using a perfect-gas code and a real-gas code

    NASA Technical Reports Server (NTRS)

    Bertin, J. J.; Graumann, B. W.

    1973-01-01

    Numerical codes were developed to calculate the two dimensional flow field which results when supersonic flow encounters double wedge configurations whose angles are such that a type 4 pattern occurs. The flow field model included the shock interaction phenomena for a delta wing orbiter. Two numerical codes were developed, one which used the perfect gas relations and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.

  4. A Secure RFID Authentication Protocol Adopting Error Correction Code

    PubMed Central

    Zheng, Xinying; Chen, Pei-Yu

    2014-01-01

    RFID technology has become popular in many applications; however, most of the RFID products lack security related functionality due to the hardware limitation of the low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting error correction code for RFID. Besides, we also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol showed that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks and replay attacks. We also compare our protocol with previous works in terms of performance. PMID:24959619
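    The role of the error-correcting code in such a protocol is to let the reader recover a tag response despite channel bit errors. A generic single-error-correcting Hamming(7,4) sketch (a standard textbook code, not the protocol's actual construction):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword
    [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct at most one flipped bit; the syndrome gives its position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    if pos:
        c[pos - 1] ^= 1    # flip the single erroneous bit back
    return c

msg = [1, 0, 1, 1]             # e.g. part of a tag's authentication response
cw = hamming74_encode(msg)
noisy = list(cw)
noisy[4] ^= 1                  # one bit flipped by channel noise
recovered = hamming74_correct(noisy)
```

    The reader can thus verify the response even after a single-bit channel error, which is one way ECC strengthens a lightweight challenge-response exchange.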

  5. A benchmark study for glacial isostatic adjustment codes

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2011-04-01

    The study of glacial isostatic adjustment (GIA) is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to loading is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements (e.g. GRACE and GOCE) to the projections of future sea level trends in response to climate change. Modern modelling approaches to GIA are based on various techniques that range from purely analytical formulations to fully numerical methods. Although various teams have independently investigated GIA, we do not yet have a suitably large set of agreed numerical results against which the methods may be validated; a community benchmark data set would clearly be valuable. Following the example of the mantle convection community, here we present, for the first time, the results of a benchmark study of codes designed to model GIA. This has taken place within a collaboration facilitated through European Cooperation in Science and Technology (COST) Action ES0701. The approaches benchmarked are based on significantly different codes and different techniques. The test computations are based on models with spherical symmetry and Maxwell rheology and include inputs from different methods and solution techniques: viscoelastic normal modes, spectral-finite elements and finite elements. The tests involve the loading and tidal Love numbers and their relaxation spectra, the deformation and gravity variations driven by surface loads characterized by simple geometry and time history and the rotational fluctuations in response to glacial unloading. In spite of the significant differences in the numerical methods employed, the test computations show a satisfactory agreement among the results provided by the participants.
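    The common physical ingredient of all the benchmarked codes is Maxwell rheology, whose hallmark is exponential stress relaxation on the Maxwell time tau = eta/mu. A minimal illustration (typical mantle-like parameter values, assumed for the example; not benchmark data):

```python
import math

# Maxwell element stress relaxation: sigma(t) = sigma0 * exp(-t / tau),
# with Maxwell time tau = eta / mu.
mu = 1.45e11        # shear modulus, Pa (assumed, mantle-like)
eta = 1e21          # viscosity, Pa s (assumed, mantle-like)
tau = eta / mu      # Maxwell time, s
years = 365.25 * 24 * 3600

sigma0 = 1e6        # initial stress, Pa (assumed)
sigma = [sigma0 * math.exp(-t * years / tau) for t in (0, 100, 1000, 10000)]
```

    With these values the Maxwell time is a couple of centuries, which sets the scale of the relaxation spectra (Love numbers, rotational response) that the benchmark compares across codes.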

  6. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    ERIC Educational Resources Information Center

    Bobka, Marilyn E.; Subramaniam, J.B.

    The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…

  7. Code-Switching in English as a Foreign Language Classroom: Teachers' Attitudes

    ERIC Educational Resources Information Center

    Ibrahim, Engku Haliza Engku; Shah, Mohamed Ismail Ahamad; Armia, Najwa Tgk.

    2013-01-01

    Code-switching has always been an intriguing phenomenon to sociolinguists. While the general attitude to it seems negative, people seem to code-switch quite frequently. Teachers of English as a foreign language too frequently claim that they do not like to code-switch in the language classroom for various reasons--many are of the opinion that only…

  8. MUXS: a code to generate multigroup cross sections for sputtering calculations

    SciTech Connect

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1982-10-01

    This report documents MUXS, a computer code to generate multigroup cross sections for charged particle transport problems. Cross sections generated by MUXS can be used in many multigroup transport codes, with minor modifications to these codes, to calculate sputtering yields, reflection coefficients, penetration distances, etc.

  9. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
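
    The sensitivity coefficients described above can be made concrete on a toy first-order reaction. The sketch below is purely illustrative (the reaction, rate constant, and time are invented; this is not LSENS's own machinery): it compares the analytic sensitivity of a concentration to a rate coefficient against a finite-difference check.

```python
import math

# Toy kinetics problem: A -> B with rate coefficient k, so A(t) = A0*exp(-k*t).
# The sensitivity of A(t) with respect to k is dA/dk = -t*A0*exp(-k*t);
# we verify the closed-form coefficient against a central finite difference.
A0, k, t = 1.0, 2.0, 0.5

def A(kk):
    return A0 * math.exp(-kk * t)

analytic = -t * A0 * math.exp(-k * t)        # closed-form sensitivity dA/dk
dk = 1e-6
finite = (A(k + dk) - A(k - dk)) / (2 * dk)  # numerical cross-check
```

A production code computes such coefficients by integrating sensitivity equations alongside the kinetics ODEs rather than by differencing, but the quantity being computed is the same.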

  10. Assessment of MARMOT. A Mesoscale Fuel Performance Code

    SciTech Connect

    Tonks, M. R.; Schwen, D.; Zhang, Y.; Chakraborty, P.; Bai, X.; Fromm, B.; Yu, J.; Teague, M. C.; Andersson, D. A.

    2015-04-01

    MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted, and new mesoscale data need to be obtained in order to complete this validation.

  11. Wind turbine design codes: A preliminary comparison of the aerodynamics

    SciTech Connect

    Buhl, M.L. Jr.; Wright, A.D.; Tangler, J.L.

    1997-12-01

    The National Wind Technology Center of the National Renewable Energy Laboratory is comparing several computer codes used to design and analyze wind turbines. The first part of this comparison is to determine how well the programs predict the aerodynamic behavior of turbines with no structural degrees of freedom. Without general agreement on the aerodynamics, it is futile to try to compare the structural response due to the aerodynamic input. In this paper, the authors compare the aerodynamic loads for three programs: Garrad Hassan's BLADED, their own WT-PERF, and the University of Utah's YawDyn. This report documents a work in progress and compares only two-bladed, downwind turbines.

  12. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    SciTech Connect

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved by basing the comma-free code word assignments on conditional probabilities of character occurrence.
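
    The compression/error-propagation trade-off at the heart of this record can be demonstrated with a minimal Huffman coder. This is an illustrative sketch (the sample text is invented, not the study's 58-character narrative files): a single flipped bit desynchronizes the variable-length stream, which is exactly the failure mode comma-free codes limit.

```python
import heapq
from collections import Counter

def huffman_code(text):
    # Build a prefix-free Huffman code from character frequencies.
    # Each heap entry is (weight, tiebreak, node) so nodes are never compared.
    heap = [(w, i, ch) for i, (ch, w) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tie, (a, b)))
        tie += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

def decode(bits, codes):
    # Greedy prefix decoding; a single bit error propagates until the
    # decoder happens to resynchronize with a valid codeword boundary.
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)

text = "a narrative file with a small character set"
codes = huffman_code(text)
bits = "".join(codes[ch] for ch in text)
avg_bits = len(bits) / len(text)  # average bits per character, the study's metric
```

Flipping even the first bit of `bits` changes the first decoded codeword, so the decoded text no longer matches; a comma-free code would confine the damage to a few characters.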

  13. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction of a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  14. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.

  15. A large scale code resolution service network in the Internet of Things.

    PubMed

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet our proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  18. Assessment of Codes and Standards Applicable to a Hydrogen Production Plant Coupled to a Nuclear Reactor

    SciTech Connect

    M. J. Russell

    2006-06-01

    This is an assessment of codes and standards applicable to a hydrogen production plant to be coupled to a nuclear reactor. The result of the assessment is a list of codes and standards that are expected to be applicable to the plant during its design and construction.

  19. The Evolution of a Coding Schema in a Paced Program of Research

    ERIC Educational Resources Information Center

    Winters, Charlene A.; Cudney, Shirley; Sullivan, Therese

    2010-01-01

    A major task involved in the management, analysis, and integration of qualitative data is the development of a coding schema to facilitate the analytic process. Described in this paper is the evolution of a coding schema that was used in the analysis of qualitative data generated from online forums of middle-aged women with chronic conditions who…

  20. Is it Code Imperfection or 'garbage in Garbage Out'? Outline of Experiences from a Comprehensive Adr Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The ADR (advection-diffusion-reaction) equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice for solving these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the computational context, PDE verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even the test results need to be analyzed mathematically to distinguish between an inherent limitation of an algorithm and a coding error. It is therefore well known that code verification is a state-of-the-art activity in which innovative methods and case-based tricks are common. This study presents full verification of a general transport code. To that end, a complete test suite was designed to probe the ADR solver comprehensively and discover all possible imperfections. We convey our experiences in finding several errors that were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests, with a gradual increase in complexity from simple tests to the most sophisticated level.
Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capabilities in handling stiff problems, nonnegative concentration, shape preservation, and
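
    One of the metrics listed, convergence order, has a standard observed-order check: run the discretization at successively refined resolutions and fit the error decay against the exact solution. A minimal self-contained illustration (a second-order central difference stands in for the ADR solver; the function and grid spacings are invented for illustration):

```python
import math

def central_diff(f, x, h):
    # Second-order accurate approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Errors against the exact derivative cos(1) at three refinement levels.
hs = [0.1, 0.05, 0.025]
errors = [abs(central_diff(math.sin, 1.0, h) - math.cos(1.0)) for h in hs]

# Observed order p from consecutive levels: e ~ C*h**p.
orders = [
    math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
    for i in range(len(hs) - 1)
]
```

An observed order that falls short of the scheme's formal order is exactly the kind of symptom that distinguishes a coding error from an inherent algorithmic limitation.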

  1. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
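
    The graph-defined structure the paper studies can be seen in Kitaev's toric code itself: qubits sit on the edges of a periodic square lattice, X-type stabilizers act on the four edges around each vertex, Z-type stabilizers on the four edges around each face, and every pair commutes because their supports overlap on an even number of edges. The sketch below is the standard textbook construction, not the paper's classification machinery; the lattice size is arbitrary.

```python
import numpy as np

L = 2                   # L x L periodic lattice; 2*L*L qubits live on the edges
N = 2 * L * L

def h_edge(i, j):       # horizontal edge leaving vertex (i, j) in the +j direction
    return (i % L) * L + (j % L)

def v_edge(i, j):       # vertical edge leaving vertex (i, j) in the +i direction
    return L * L + (i % L) * L + (j % L)

def vertex_star(i, j):
    # Support of the X-type stabilizer: the four edges incident to vertex (i, j).
    s = np.zeros(N, dtype=int)
    for e in (h_edge(i, j), h_edge(i, j - 1), v_edge(i, j), v_edge(i - 1, j)):
        s[e] ^= 1
    return s

def plaquette(i, j):
    # Support of the Z-type stabilizer: the four edges bounding the face whose
    # lower-left vertex is (i, j).
    s = np.zeros(N, dtype=int)
    for e in (h_edge(i, j), h_edge(i + 1, j), v_edge(i, j), v_edge(i, j + 1)):
        s[e] ^= 1
    return s

def commute(x_support, z_support):
    # An X-string and a Z-string commute iff they share an even number of qubits.
    return int(np.dot(x_support, z_support)) % 2 == 0
```

Every vertex touches each face on either zero or two shared edges, which is the even-overlap condition the 4-valent graph picture encodes.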

  2. Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code

    NASA Astrophysics Data System (ADS)

    Yu, Deliang; Yan, Longwen; Zhong, Guangwu; Lu, Jie; Yi, Ping

    2007-04-01

    A Monte Carlo code (MCHGAS) has been developed to investigate neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the Splitting and Roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient for simulating neutral particle transport in small and medium-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.
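
    The Splitting and Roulette schemes mentioned above are standard unbiased variance-reduction moves in Monte Carlo transport. A schematic Python version follows (the weights and thresholds are invented for illustration and are not MCHGAS's own parameters): both operations preserve the expected total statistical weight, which is what keeps the tally unbiased while cutting calculation time.

```python
import random

def split(particles, n=4):
    # Splitting: in important regions, replace each particle by n copies
    # carrying 1/n of its statistical weight (expected weight unchanged).
    return [w / n for w in particles for _ in range(n)]

def roulette(particles, cutoff=0.1, survivor_weight=0.5):
    # Russian roulette: a particle below the weight cutoff survives with
    # probability w / survivor_weight and is boosted to survivor_weight,
    # so the expected weight w is preserved; otherwise it is killed.
    out = []
    for w in particles:
        if w >= cutoff:
            out.append(w)
        elif random.random() < w / survivor_weight:
            out.append(survivor_weight)
    return out
```

Splitting multiplies histories where they matter; roulette culls the cheap, low-weight histories that would otherwise dominate runtime.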

  3. TACI: a code for interactive analysis of neutron data produced by a tissue equivalent proportional counter

    SciTech Connect

    Cummings, F.M.

    1984-06-01

    The TEPC analysis code (TACI) is a computer program designed to analyze pulse height data generated by a tissue equivalent proportional counter (TEPC). It is written in HP BASIC and is for use on an HP-87XM personal computer. The theory of TEPC analysis upon which this code is based is summarized.

  4. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations were studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times, but very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses, because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  5. A thermal neutron source imager using coded apertures

    SciTech Connect

    Vanier, P.E.; Forman, L.; Selcow, E.C.

    1995-08-01

    To facilitate the process of re-entry vehicle on-site inspections, it would be useful to have an imaging technique which would allow the counting of deployed multiple nuclear warheads without significant disassembly of a missile's structure. Since neutrons cannot easily be shielded without massive amounts of materials, they offer a means of imaging the separate sources inside a sealed vehicle. Thermal neutrons carry no detailed spectral information, so their detection should not be as intrusive as gamma ray imaging. A prototype device for imaging at close range with thermal neutrons has been constructed using an array of ³He position-sensitive gas proportional counters combined with a uniformly redundant coded aperture array. A sealed ²⁵²Cf source surrounded by a polyethylene moderator is used as a test source. By means of slit and pinhole experiments, count rates of image-forming neutrons (those which cast a shadow of a Cd aperture on the detector) are compared with the count rates for background neutrons. The resulting ratio, which limits the available image contrast, is measured as a function of distance from the source. The envelope of performance of the instrument is defined by the contrast ratio, the angular resolution, and the total count rate as a function of distance from the source. These factors will determine whether such an instrument could be practical as a tool for treaty verification.
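
    The uniformly redundant coded-aperture principle can be illustrated in one dimension: a quadratic-residue mask has a cyclic autocorrelation with perfectly flat sidelobes, so correlating the detector pattern with a balanced decoding array recovers a point source as a sharp peak. The toy sketch below uses an invented mask length and source (not the instrument's actual aperture):

```python
import numpy as np

p = 11  # prime with p % 4 == 3: the quadratic residues mod p form a difference set
residues = {(i * i) % p for i in range(1, p)}
mask = np.array([1.0 if i in residues else 0.0 for i in range(p)])  # 1 = open hole
G = 2.0 * mask - 1.0   # balanced decoding array: +1 where open, -1 where closed

scene = np.zeros(p)
scene[3] = 5.0          # a single point source of strength 5 at position 3

# Each detector bin sees the scene through a cyclically shifted mask.
detector = np.array([np.dot(np.roll(scene, -i), mask) for i in range(p)])

# Decoding: cyclic convolution of the detector pattern with G. The flat
# sidelobes of the mask autocorrelation make the system response a sharp peak
# on a uniform pedestal.
recon = np.array(
    [sum(detector[(i - j) % p] * G[j] for j in range(p)) for i in range(p)]
)
```

Because the (11, 5, 2) difference set gives autocorrelation 5 at zero shift and exactly 2 elsewhere, the reconstruction is 6·scene minus the total flux, so the source stands out cleanly at its true position.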

  6. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.

  7. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
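
    The restricted three-body idea is that massless test particles respond to a small number of massive bodies without self-gravity, which is what makes wide parameter-space searches cheap. A toy leapfrog sketch in that spirit follows (units, softening, and step size are arbitrary; this is a generic integrator, not the JSPAM algorithm itself):

```python
import math

def accel(p, primaries):
    # Acceleration on a massless test particle from point-mass primaries (G = 1).
    ax = ay = 0.0
    for px, py, m in primaries:
        dx, dy = px - p[0], py - p[1]
        r3 = (dx * dx + dy * dy + 1e-9) ** 1.5   # tiny softening term
        ax += m * dx / r3
        ay += m * dy / r3
    return ax, ay

def leapfrog(p, v, primaries, dt, steps):
    # Kick-drift-kick leapfrog; symplectic, so bound orbits stay bounded.
    ax, ay = accel(p, primaries)
    for _ in range(steps):
        v = (v[0] + 0.5 * dt * ax, v[1] + 0.5 * dt * ay)
        p = (p[0] + dt * v[0], p[1] + dt * v[1])
        ax, ay = accel(p, primaries)
        v = (v[0] + 0.5 * dt * ax, v[1] + 0.5 * dt * ay)
    return p, v

# Sanity check: a circular test orbit (r = 1, v = 1) around one unit-mass body.
pos, vel = leapfrog((1.0, 0.0), (0.0, 1.0), [(0.0, 0.0, 1.0)], 0.01, 1000)
radius = math.hypot(pos[0], pos[1])
```

In an interacting-galaxies run, the primaries would themselves move on a mutual orbit and a disk of such test particles around each would trace the tidal response; additions like dynamical friction modify the primaries' orbit, not the test particles' equations.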

  8. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789

  9. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  10. Building energy codes as a tool for energy efficiency: Examining implementation in Kentucky

    NASA Astrophysics Data System (ADS)

    Zwicker, Brittany L.

    2011-12-01

    Kentucky adopted the 2009 IECC residential energy code in 2011 and is developing a plan for achieving 90 percent compliance with the code. This report examines recommendations for energy code implementation from various expert sources and then compares them to Kentucky's current and planned future procedures for energy code adoption, implementation, and enforcement. It seeks to answer the question: To what extent is Kentucky following expert recommendations as it moves toward adopting and planning for implementation and enforcement of the IECC 2009? The report concludes with recommendations to the Kentucky Board of Housing, Buildings, and Construction for increasing residential energy code compliance and suggestions for exploring increased utility investments in energy efficiency.

  11. Honor Codes and Other Contextual Influences on Academic Integrity: A Replication and Extension to Modified Honor Code Settings.

    ERIC Educational Resources Information Center

    McCabe, Donald L.; Trevino, Linda Klebe; Butterfield, Kenneth D.

    2002-01-01

    Investigated the influence of modified honor codes, an alternative to traditional codes that is gaining popularity on larger campuses. Also tested the model of student academic dishonesty previously suggested by McCabe and Trevino. Found that modified honor codes are associated with lower levels of student dishonesty and that the McCabe Trevino…

  12. Alternative knowledge acquisition: Developing a pulse coded neural network

    SciTech Connect

    Dress, W.B.

    1987-01-01

    After a Rip-van-Winkle nap of more than 20 years, the ideas of biologically motivated computing are re-emerging. Instrumental to this awakening have been the highly publicized contributions of John Hopfield and major advances in the neurosciences. In 1982, Hopfield showed how a system of maximally coupled neuron-like elements described by a Hamiltonian formalism (a linear, conservative system) could behave in a manner startlingly suggestive of the way humans might go about solving problems and retrieving memories. Continuing advances in the neurosciences are providing a coherent basis for suggesting how nature's neurons might function. A particular model is described for an artificial neural system designed to interact with (learn from and manipulate) a simulated (or real) environment. The model is based on early work by Iben Browning. The Browning model, designed to investigate computer-based intelligence, contains a particular simplification based on observations of frequency coding of information in the brain and information flow from receptors to the brain and back to effectors. The ability to act on and react to the environment was seen as an important principle, leading to self-organization of the system.

  13. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
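
    Golomb-Rice coding writes each value as a unary quotient plus a k-bit binary remainder, which is why it shares circuitry naturally with a counting (single-slope) ADC. The bit-level software sketch below is purely illustrative of the code itself; the chip realizes the equivalent operation jointly in mixed-signal hardware, and the zigzag mapping shown is one common way (assumed here, not taken from the paper) to make signed prediction residuals nonnegative.

```python
def rice_encode(n, k):
    # Nonnegative n -> unary quotient, '0' terminator, k-bit binary remainder.
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

def rice_decode(bits, k):
    # Inverse of rice_encode; returns (value, number of bits consumed).
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k

def zigzag(x):
    # Map signed prediction residuals to nonnegative integers: 0,-1,1,-2,... -> 0,1,2,3,...
    return 2 * x if x >= 0 else -2 * x - 1
```

Small residuals, which dominate after good prediction, get short codewords; the unary part is exactly a count, matching the single-slope ADC's ramp-and-count operation.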

  14. Improving Residents' Code Status Discussion Skills: A Randomized Trial

    PubMed Central

    Neely, Kathy J.; Sharma, Rashmi K.; Cohen, Elaine R.; McGaghie, William C.; Wayne, Diane B.

    2012-01-01

    Abstract Background Inpatient Code Status Discussions (CSDs) are commonly facilitated by resident physicians, despite inadequate training. We studied the efficacy of a CSD communication skills training intervention for internal medicine residents. Methods This was a prospective, randomized controlled trial of a multimodality communication skills educational intervention for postgraduate year (PGY) 1 residents. Intervention group residents completed a 2 hour teaching session with deliberate practice of communication skills, online modules, self-reflection, and a booster training session in addition to assigned clinical rotations. Control group residents completed clinical rotations alone. CSD skills of residents in both groups were assessed 2 months after the intervention using an 18 item behavioral checklist during a standardized patient encounter. Average scores for intervention and control group residents were calculated and between-group differences on the CSD skills assessment were evaluated using two-tailed independent sample t tests. Results Intervention group residents displayed higher overall scores on the simulated CSD (75.1% versus 53.2%, p<0.0001) than control group residents. The intervention group also displayed a greater number of key CSD communication behaviors and facilitated significantly longer conversations. The training, evaluation, and feedback sessions were rated highly. Conclusion A focused, multimodality curriculum can improve resident performance of simulated CSDs. Skill improvement lasted for at least 2 months after the intervention. Further studies are needed to assess skill retention and to set minimum performance standards. PMID:22690890

  15. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  16. HELIOS: A new open-source radiative transfer code

    NASA Astrophysics Data System (ADS)

    Malik, Matej; Grosheintz, Luc; Lukas Grimm, Simon; Mendonça, João; Kitzmann, Daniel; Heng, Kevin

    2015-12-01

    I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2], we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results of the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles & spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b & CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, arXiv:1503.03806. [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944. [3] e.g. Heng, Mendonca & Lee 2014, ApJS, 215, 4. [4] exoclime.net

  17. Rewriting the epigenetic code for tumor resensitization: a review.

    PubMed

    Oronsky, Bryan; Oronsky, Neil; Scicinski, Jan; Fanger, Gary; Lybeck, Michelle; Reid, Tony

    2014-10-01

    In cancer chemotherapy, one axiom, which has practically solidified into dogma, is that acquired resistance to antitumor agents or regimens, nearly inevitable in all patients with metastatic disease, remains unalterable and irreversible, rendering therapeutic rechallenge futile. However, the introduction of epigenetic therapies, including histone deacetylase inhibitors (HDACis) and DNA methyltransferase inhibitors (DNMTIs), provides oncologists, like computer programmers, with new techniques to "overwrite" the modifiable software pattern of gene expression in tumors and challenge the "one and done" treatment prescription. Taking the epigenetic code-as-software analogy a step further, if chemoresistance is the product of multiple nongenetic alterations, which develop and accumulate over time in response to treatment, then the possibility to hack or tweak the operating system and fall back on a "system restore" or "undo" feature, like the arrow icon in the Windows XP toolbar, reconfiguring the tumor to its baseline nonresistant state, holds tremendous promise for turning advanced, metastatic cancer from a fatal disease into a chronic, livable condition. This review aims 1) to explore the potential mechanisms by which a group of small molecule agents including HDACis (entinostat and vorinostat), DNMTIs (decitabine and 5-azacytidine), and redox modulators (RRx-001) may reprogram the tumor microenvironment from a refractory to a nonrefractory state, 2) highlight some recent findings, and 3) discuss whether the current "once burned forever spurned" paradigm in the treatment of metastatic disease should be revised to promote active resensitization attempts with formerly failed chemotherapies.

  18. A neuronal learning rule for sub-millisecond temporal coding

    NASA Astrophysics Data System (ADS)

    Gerstner, Wulfram; Kempter, Richard; van Hemmen, J. Leo; Wagner, Hermann

    1996-09-01

    A paradox that exists in auditory and electrosensory neural systems [1,2] is that they encode behaviourally relevant signals in the range of a few microseconds with neurons that are at least one order of magnitude slower. The importance of temporal coding in neural information processing is not yet clear [3-8]. A central question is whether neuronal firing can be more precise than the time constants of the neuronal processes involved [9]. Here we address this problem using the auditory system of the barn owl as an example. We present a modelling study based on computer simulations of a neuron in the laminar nucleus. Three observations explain the paradox. First, spiking of an 'integrate-and-fire' neuron driven by excitatory postsynaptic potentials with a width at half-maximum height of 250 μs has an accuracy of 25 μs if the presynaptic signals arrive coherently. Second, the necessary degree of coherence in the signal arrival times can be attained during ontogenetic development by virtue of an unsupervised Hebbian learning rule. Learning selects connections with matching delays from a broad distribution of axons with random delays. Third, the learning rule also selects the correct delays from two independent groups of inputs, for example, from the left and right ear.
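    The first observation, spike timing far sharper than the EPSP width when inputs arrive coherently, can be illustrated with a toy integrate-and-fire sketch (the alpha-function EPSP shape, the time constant, and the threshold below are illustrative assumptions, not the paper's model parameters):

```python
import math

def epsp(t, tau=0.1):
    # Alpha-function EPSP (times in ms); peak amplitude 1 at t = tau.
    # tau = 0.1 ms gives a width at half-maximum of roughly 250 us.
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def first_crossing(arrivals, threshold, dt=0.001, t_max=5.0):
    # Return the time (ms) at which the summed EPSPs first reach
    # threshold, or None if they never do.
    for i in range(int(t_max / dt)):
        t = i * dt
        if sum(epsp(t - a) for a in arrivals) >= threshold:
            return t
    return None
```

    With ten coherent inputs at t = 1 ms and a threshold at half the summed peak, the crossing lands within a few tens of microseconds of the arrival; the same inputs jittered over a millisecond never reach threshold at all.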
  1. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, large constraint length and short decode frames. Other applications of the classical Viterbi algorithm (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
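    For reference, the classical Viterbi decoder that the QVA accelerates can be sketched for a standard rate-1/2, constraint-length-3 convolutional code (the octal (7,5) generators here are a common textbook choice, not taken from the paper):

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    # Rate-1/2 feedforward convolutional encoder, octal generators (7,5).
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in gens]
        state = (b << (K - 2)) | (state >> 1)
    return out

def viterbi_decode(received, n_bits, K=3, gens=(0b111, 0b101)):
    # Hard-decision Viterbi decoding: keep, per trellis state, only the
    # minimum-Hamming-distance survivor path.
    survivors = {0: (0, [])}          # state -> (path metric, decoded bits)
    for i in range(n_bits):
        sym = received[2 * i: 2 * i + 2]
        nxt = {}
        for s, (m, path) in survivors.items():
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                out = [bin(reg & g).count("1") & 1 for g in gens]
                d = m + sum(o != r for o, r in zip(out, sym))
                ns = (b << (K - 2)) | (s >> 1)
                if ns not in nxt or d < nxt[ns][0]:
                    nxt[ns] = (d, path + [b])
        survivors = nxt
    return min(survivors.values())[1]  # survivor with the smallest metric
```

    With a K-1 zero-bit tail to flush the register, this code (free distance 5) recovers the message even after a single channel bit error.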

  2. MITOM: a new unfolding code based on a spectra model method applied to neutron spectrometry.

    PubMed

    Tomás, M; Fernández, F; Bakali, M; Muller, H

    2004-01-01

    The MITOM code was developed at UAB (Universitat Autònoma de Barcelona) for unfolding neutron spectrometric measurements with a Bonner spheres system (BSS). One of the main characteristics of this code is that an initial parameterisation of the neutron energy components (thermal, intermediate and fast) is needed. This code uses the Monte Carlo method and the Bayesian theorem to obtain a set of solutions achieving different criteria and conditions between calculated and measured count rates. The final solution is an average of the acceptable solutions. The MITOM code was tested for ISO sources and a good agreement was observed between the reference values and the unfolded ones for global magnitudes. The code was applied recently to characterise both thermal SIGMA and CANEL/T400 sources of the IRSN facilities. The results of these applications were very satisfactory as well.

  3. Accuracy analysis on C/A code and P(Y) code pseudo-range of GPS dual frequency receiver and application in point positioning

    NASA Astrophysics Data System (ADS)

    Peng, Xiuying; Fan, Shijie; Guo, Jiming

    2008-10-01

    When Anti-Spoofing (A-S) is active, civilian users have difficulty using the P(Y) code for precise navigation and positioning. Z-tracking is one of the effective techniques for acquiring the P(Y) code. In this paper, the accuracy of pseudoranges from the C/A code and P(Y) code for dual-frequency GPS receivers is discussed. The principle of measuring the encrypted P(Y) code is described first; then a large data set from IGS tracking stations is used for analysis and verification with the help of precise point positioning software developed by the authors. In particular, P(Y) code pseudoranges from civilian GPS receivers allow eliminating/reducing the effect of ionospheric delay and improve the precision of positioning. Point positioning experiments demonstrating this are presented at the end.
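    The ionospheric elimination that dual-frequency P(Y) pseudoranges permit is the standard L1/L2 ionosphere-free linear combination; a minimal sketch (the carrier frequencies are the standard GPS values, the simulated range and delay in the example are made up for illustration):

```python
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1 and L2 carrier frequencies, Hz

def iono_free(p1, p2):
    # First-order ionosphere-free combination of L1/L2 pseudoranges (m).
    # The ionospheric delay scales as 1/f**2, so it cancels exactly to
    # first order in this linear combination.
    g = (F1 / F2) ** 2             # ~2.546 squared frequency ratio factor
    return (g * p1 - p2) / (g - 1.0)
```

    Feeding in a geometric range of 22,000 km with a 5 m ionospheric delay on L1 (and the correspondingly scaled delay on L2) returns the geometric range.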

  4. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.

  5. A Flexible Point-Kernel Shielding Code System.

    1991-01-01

    Version 00 MARMER is a point-kernel shielding code which can be used to calculate the dose rate, energy absorption rate, energy flux or gamma-ray flux due to several sources at any point in a complex geometry. The geometry is described by the MARS geometry system which makes use of combinatorial geometry and an array repeating feature. Source spectra may be defined in several ways including an option to read a binary file containing nuclide concentrations, which has been calculated by ORIGEN-S. Therefore, MARMER makes use of a nuclide data library containing half life times, decay energies and gamma yields for over 1000 nuclides. To facilitate the use of ORIGEN-S in the VAX version, a preprocessor named PREORI is included for simple irradiation and decay problems. The spatial description of the source may be done in cartesian, cylindrical and spherical coordinates; and the source strength as a function of the distance along the coordinate axes may be done in many different ways. Several sources with different spectra may be treated simultaneously. As many calculational points as needed may be defined.
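    The core of any point-kernel code like MARMER is the attenuated inverse-square kernel, optionally corrected by a buildup factor; a minimal sketch for a single source and detector point (the linear buildup factor used in the example is a placeholder assumption, not MARMER's actual data):

```python
import math

def point_kernel_flux(S, mu, r, buildup=None):
    # Uncollided flux (per cm^2 per s) at distance r (cm) from an
    # isotropic point source of strength S (photons/s) behind a shield
    # with linear attenuation coefficient mu (1/cm). A buildup factor
    # B(mu*r) accounts for scattered photons when supplied.
    phi = S * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)
    return phi * (buildup(mu * r) if buildup else 1.0)
```

    A full point-kernel code sums this kernel over a mesh of source points and over the material layers crossed by each source-to-detector ray; the buildup correction always increases the uncollided estimate.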

  6. GRMHD Simulations of Jet Formation with a New Code

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated, constrained transport scheme is used to maintain a divergence-free magnetic field. Various one-dimensional test problems in both special and general relativity show significant improvements over our previous model. We have performed simulations of jet formation from a geometrically thin accretion disk near both nonrotating and rotating black holes. The new simulation results show that the jet is formed in the same manner as in previous work and propagates outward. In the rotating black hole cases, jets form much closer to the black hole's ergosphere and the magnetic field is strongly twisted due to the frame-dragging effect. As the magnetic field strength becomes weaker, a larger amount of matter is launched with the jet. On the other hand, when the magnetic field strength becomes stronger, the jet has less matter and becomes Poynting-flux dominated. We will also discuss how the jet properties depend on the rotation of a black hole.
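    Per cell interface, the HLL approximate Riemann solver mentioned above reduces to a single algebraic flux formula; a minimal sketch (array-valued conserved states, with wave-speed estimates supplied by the caller):

```python
import numpy as np

def hll_flux(UL, UR, FL, FR, sL, sR):
    # HLL single-intermediate-state Riemann flux for left/right conserved
    # states UL, UR with physical fluxes FL, FR and wave-speed estimates
    # sL <= sR. The supersonic cases fall back to the pure upwind flux.
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```

    A useful consistency check is that identical left and right states return the common physical flux, whatever the wave speeds.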

  7. A Noise-Aware Coding Scheme for Texture Classification

    PubMed Central

    Shoyaib, Mohammad; Abdullah-Al-Wadud, M.; Chae, Oksam

    2011-01-01

    Texture-based analysis of images is a very common and much discussed issue in the fields of computer vision and image processing. Several methods have already been proposed to codify texture micro-patterns (texlets) in images. Most of these methods perform well when a given image is noise-free, but real world images contain different types of signal-independent as well as signal-dependent noises originated from different sources, even from the camera sensor itself. Hence, it is necessary to differentiate false textures appearing due to the noises, and thus, to achieve a reliable representation of texlets. In this proposal, we define an adaptive noise band (ANB) to approximate the amount of noise contamination around a pixel up to a certain extent. Based on this ANB, we generate reliable codes named noise tolerant ternary pattern (NTTP) to represent the texlets in an image. Extensive experiments on several datasets from renowned texture databases, such as the Outex and the Brodatz database, show that NTTP performs much better than the state-of-the-art methods. PMID:22164060
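    The idea of a noise band around the centre pixel can be sketched with a ternary code over a 3x3 patch (this is a generic local-ternary-pattern sketch with a fixed band t, not the paper's adaptive ANB estimate):

```python
def ternary_code(patch, t):
    # Code the 8 neighbours of a 3x3 patch relative to its centre:
    # differences within +/-t are treated as noise and coded 0;
    # clearly brighter neighbours code +1, clearly darker ones -1.
    c = patch[1][1]
    code = []
    for i in range(3):
        for j in range(3):
            if (i, j) == (1, 1):
                continue
            d = patch[i][j] - c
            code.append(0 if abs(d) <= t else (1 if d > 0 else -1))
    return code
```

    Small sensor-noise fluctuations around a flat region then map to the all-zero code instead of a spurious texlet, while a genuine edge still produces a distinctive pattern.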

  8. HYDRA, A finite element computational fluid dynamics code: User manual

    SciTech Connect

    Christon, M.A.

    1995-06-01

    HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to achieve a high level of performance, rather than just performing a port.

  9. A Software Upgrade of the NASA Aeroheating Code "MINIVER"

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce Mathew

    2013-01-01

    Computational Fluid Dynamics (CFD) is a powerful and versatile tool simulating fluid and thermal environments of launch and re-entry vehicles alike. Where it excels in power and accuracy, however, it lacks in speed. An alternative tool for this purpose is known as MINIVER, an aeroheating code widely used by NASA and within the aerospace industry. Capable of providing swift, reasonably accurate approximations of the fluid and thermal environment of launch vehicles, MINIVER is used where time is of the essence and accuracy need not be exact. However, MINIVER is an old, aging tool: running on a user-unfriendly, legacy command-line interface, it is difficult for it to keep pace with more modern software tools. Florida Institute of Technology was tasked with the construction of a new Graphical User Interface (GUI) that implemented the legacy version's capabilities and enhanced them with new tools and utilities. This thesis provides background to the legacy version of the program, the progression and final version of a modern user interface, and benchmarks to demonstrate its usefulness.

  10. Fast-coding robust motion estimation model in a GPU

    NASA Astrophysics Data System (ADS)

    García, Carlos; Botella, Guillermo; de Sande, Francisco; Prieto-Matias, Manuel

    2015-02-01

    Nowadays, vision systems are used for countless purposes. Motion estimation is a discipline that allows the extraction of relevant information such as pattern segmentation, 3D structure, or object tracking. However, the real-time requirements of most applications have limited its consolidation, requiring the adoption of high-performance systems to meet response times. With the emergence of highly parallel devices known as accelerators, this gap has narrowed. Two extreme endpoints in the spectrum of most common accelerators are Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs), which usually offer higher performance rates than general-purpose processors. Moreover, the use of GPUs as accelerators requires the efficient exploitation of any parallelism in the target application. This task is not easy, because performance rates are affected by many aspects that programmers must overcome. In this paper, we evaluate the OpenACC standard, a directive-based programming model which favors porting any code to a GPU, in the context of a motion estimation application. The results confirm that this programming paradigm is suitable for image processing applications, achieving a very satisfactory acceleration in convolution-based problems such as the well-known Lucas & Kanade method.
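    The Lucas & Kanade method referenced above solves a small least-squares system built from image gradients; a minimal single-window NumPy sketch (global translation only, with synthetic ramp images in the example):

```python
import numpy as np

def lucas_kanade(I1, I2):
    # Estimate one global displacement (vx, vy) from the optical-flow
    # constraint Ix*vx + Iy*vy + It = 0, solved in the least-squares
    # sense over every pixel of the frame pair.
    Ix = np.gradient(I1, axis=1)   # spatial gradient along columns (x)
    Iy = np.gradient(I1, axis=0)   # spatial gradient along rows (y)
    It = I2 - I1                   # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v
```

    On a linear intensity ramp shifted one pixel to the right between frames, the estimator recovers (vx, vy) = (1, 0); a production implementation would solve the same system per window, typically with per-pixel weighting and an image pyramid.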

  11. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-01

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.

  12. A mechanistic code for intact and defective nuclear fuel element performance

    NASA Astrophysics Data System (ADS)

    Shaheen, Khaled

    During reactor operation, nuclear fuel elements experience an environment featuring high radiation, temperature, and pressure. Predicting in-reactor performance of nuclear fuel elements constitutes a complex multi-physics problem, one that requires numerical codes to be solved. Fuel element performance codes have been developed for different reactor and fuel designs. Most of these codes simulate fuel elements using one- or quasi-two-dimensional geometries, and some codes are only applicable to steady state but not transient behaviour and vice versa. Moreover, while many conceptual and empirical separate-effects models exist for defective fuel behaviour, wherein the sheath is breached allowing coolant ingress and fission gas escape, there have been few attempts to predict defective fuel behaviour in the context of a mechanistic fuel performance code. Therefore, a mechanistic fuel performance code, called FORCE (Fuel Operational peRformance Computations in an Element), is proposed for the time-dependent behaviour of intact and defective CANDU nuclear fuel elements. The code, which is implemented in the COMSOL Multiphysics commercial software package, simulates the fuel, sheath, and fuel-to-sheath gap in a radial-axial geometry. For intact fuel performance, the code couples models for heat transport, fission gas production and diffusion, and structural deformation of the fuel and sheath. The code is extended to defective fuel performance by integrating an adapted version of a previously developed fuel oxidation model, and a model for the release of radioactive fission product gases from the fuel to the coolant. The FORCE code has been verified against the ELESTRES-IST and ELESIM industrial codes for its predictions of intact fuel performance. For defective fuel behaviour, the code has been validated against coulometric titration data for oxygen-to-metal ratio in defective fuel elements from commercial reactors, while also being compared to a conceptual oxidation model

  13. [The evolution of the Italian Code of Medical Deontology: a historical-epistemological perspective].

    PubMed

    Conti, A A

    2014-01-01

    The Italian Code of Medical Deontology is a set of self-discipline rules established by the medical profession that are mandatory for the members of the medical registers, who must conform to them. The history of the Italian Code of Medical Deontology dates back to the beginning of the twentieth century. In 1903 it appeared in the form of a "Code of Ethics and Deontology" and was prepared by the Board of the Medical Register of Sassari (Sardinia). This Board inserted the principles inspiring the correct practice of the medical profession in an articulated and self-normative system, also foreseeing disciplinary measures. About ten years later, in 1912, the Medical Register of Turin (Piedmont) elaborated a Code which constituted the basis for a subsequent elaboration leading to a Unified Code of Medical Ethics (1924). After World War II the idea prevailed in Italy that the codes of medical deontology should undergo periodical review, updating and dissemination, and the new 1947 text (Turin) was for the first time amply diffused among Italian physicians. The next national code dates back to 1958, and twenty years later a revision was published. In the 1989 Code new topics appeared, including organ transplantation, artificial in vitro insemination and the role of police doctors; these and other issues were later developed in the 1995, 1998 and 2006 versions of the Code. The last available edition of the Italian Code of Medical Deontology is that of May 2014. PMID:25524190

  14. COBRA-SFS: A thermal-hydraulic analysis code for spent fuel storage and transportation casks

    SciTech Connect

    Michener, T.E.; Rector, D.R.; Cuta, J.M.; Dodge, R.E.; Enderlin, C.W.

    1995-09-01

    COBRA-SFS is a general thermal-hydraulic analysis computer code for prediction of material temperatures and fluid conditions in a wide variety of systems. The code has been validated for analysis of spent fuel storage systems, as part of the Commercial Spent Fuel Management Program of the US Department of Energy. The code solves finite volume equations representing the conservation equations for mass, momentum, and energy for an incompressible single-phase heat transfer fluid. The fluid solution is coupled to a finite volume solution of the conduction equation in the solid structure of the system. This document presents a complete description of Cycle 2 of COBRA-SFS, and consists of three main parts. Part 1 describes the conservation equations, constitutive models, and solution methods used in the code. Part 2 presents the User Manual, with guidance on code applications, and complete input instructions. This part also includes a detailed description of the auxiliary code RADGEN, used to generate grey body view factors required as input for radiative heat transfer modeling in the code. Part 3 describes the code structure, platform-dependent coding, and program hierarchy. Installation instructions are also given for the various platform versions of the code that are available.

  15. Cars Thermometry in a Supersonic Combustor for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Danehy, P. M.; Springer, R. R.; DeLoach, R.; Capriotti, D. P.

    2002-01-01

    An experiment has been conducted to acquire data for the validation of computational fluid dynamics (CFD) codes used in the design of supersonic combustors. The primary measurement technique is coherent anti-Stokes Raman spectroscopy (CARS), although surface pressures and temperatures have also been acquired. Modern design-of-experiment techniques have been used to maximize the quality of the data set (for the given level of effort) and minimize systematic errors. The combustor consists of a diverging duct with a single downstream-angled wall injector. Nominal entrance Mach number is 2 and enthalpy nominally corresponds to Mach 7 flight. Temperature maps are obtained at several planes in the flow for two cases: in one case the combustor is piloted by injecting fuel upstream of the main injector, the second is not. Boundary conditions and uncertainties are adequately characterized. Accurate CFD calculation of the flow will ultimately require accurate modeling of the chemical kinetics and turbulence-chemistry interactions as well as accurate modeling of the turbulent mixing.

  16. Understanding the Code: acting in a patient's best interests.

    PubMed

    Griffith, Richard

    2015-09-01

    The revised Code of the Nursing and Midwifery Council (NMC), the statutory professional regulator for registered district nurses, makes clear that while district nurses can interpret the values and principles for use in community settings, the standards are not negotiable or discretionary. They must be applied, or the district nurse's fitness to practise will be called into question. In this article in the continuing series analysing the legal implications of the Code for district nurse practice, the author considers the fourth standard, which requires district nurses to act in the best interests of people at all times. PMID:26322994

  17. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  18. A supersonic three-dimensional code for flow over blunt bodies: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Chaussee, D. S.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of steady, supersonic, three-dimensional, inviscid flow over blunt bodies is illustrated. Input and output are given and explained for two cases: a pointed cone of 20 deg half-angle at 15 deg angle of attack in a free stream with M∞ = 7, and a cone-ogive-cylinder at 10 deg angle of attack with M∞ = 2.86. A source listing of the computer code is provided.

  19. A dual-sided coded-aperture radiation detection system

    NASA Astrophysics Data System (ADS)

    Penny, R. D.; Hood, W. E.; Polichar, R. M.; Cardone, F. H.; Chavez, L. G.; Grubbs, S. G.; Huntley, B. P.; Kuharski, R. A.; Shyffer, R. T.; Fabris, L.; Ziock, K. P.; Labov, S. E.; Nelson, K.

    2011-10-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5×5×50 cm³ cesium iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.
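The gain-stabilization step described above, tracking the known potassium-40 background line and rescaling each detector's gain, reduces to a one-line correction. The following is a minimal sketch with hypothetical function names, not the system's actual calibration code:

```python
# Gain stabilization against the K-40 background line (illustrative sketch).
# The K-40 gamma line sits at a known energy; if a detector's measured peak
# drifts, rescale that detector's gain so the peak maps back to 1460.8 keV.
K40_ENERGY_KEV = 1460.8

def updated_gain(current_gain, measured_peak_kev):
    """Rescale a single detector's gain so its K-40 peak reads 1460.8 keV."""
    return current_gain * K40_ENERGY_KEV / measured_peak_kev

# Example: a detector whose K-40 peak has drifted up to 1475 keV
# needs its gain reduced slightly.
gain = updated_gain(1.00, 1475.0)
```

Applying this correction continuously, per detector, keeps the 37-element array's energy scales aligned without taking the system offline.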

  20. Direct multi-scale coupling of a transport code to gyrokinetic turbulence codes, with comparisons to tokamak experiments

    NASA Astrophysics Data System (ADS)

    Barnes, Michael

    2009-11-01

    To faithfully simulate ITER and other modern fusion devices, one must resolve electron and ion fluctuation scales in a five-dimensional phase space and time. Simultaneously, one must account for the interaction of this turbulence with the slow evolution of the large-scale plasma profiles. Because of the enormous range of scales involved and the high dimensionality of the problem, resolved first-principles simulations of the full core volume over the confinement time are very challenging using conventional (brute force) techniques. In order to address this problem, we have developed a new approach in which turbulence calculations from multiple gyrokinetic flux tube simulations are coupled together using gyrokinetic transport equations to obtain self-consistent equilibrium profiles and corresponding turbulent fluxes. This multi-scale approach is embodied in a new code, Trinity, which is capable of evolving equilibrium profiles for multiple species, including electromagnetic effects and realistic magnetic geometry, at a fraction of the cost of conventional direct numerical simulations. Key components in the cost reduction are the extreme parallelism enabled by the use of coupled flux tubes and the use of a nonlinear implicit algorithm to take large time steps when evolving the equilibrium. In this talk, we describe the multi-scale model employed in Trinity and present simulation results using nonlinear fluxes calculated with the gyrokinetic turbulence codes GS2 and GENE. We compare the numerical predictions from Trinity simulations with experimental results from a number of fusion devices, including JET and MAST.

  1. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
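The smallest member of this family, standard in the bosonic-codes literature, makes the construction concrete. It protects against a single boson-loss error:

```latex
% The smallest binomial code: two code words supported on even Fock states,
% weighted by binomial coefficients.
\begin{align}
  |W_{\uparrow}\rangle &= \frac{|0\rangle + |4\rangle}{\sqrt{2}}, &
  |W_{\downarrow}\rangle &= |2\rangle .
\end{align}
% Both words have mean boson number 2, so a single boson loss occurs at the
% same rate in either word; a loss maps the code space onto odd Fock states,
% and measuring the (generalized) number parity therefore detects the error
% without revealing the logical state.
```

Higher-order members of the family extend this pattern with larger Fock-state spacings and higher-degree binomial weightings, correcting more losses, gains, and dephasing errors at the cost of larger mean boson number.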

  2. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 1A: user's manual - theoretical background and numerical equations. Computer code manual. [PWR; BWR

    SciTech Connect

    Hofmann, R.

    1981-11-01

    A useful computer simulation method based on the explicit finite difference technique can be used to address transient dynamic situations associated with nuclear reactor design and analysis. This volume is divided into two parts. Part A contains the theoretical background (physical and numerical) and the numerical equations for the STEALTH 1D, 2D, and 3D computer codes. Part B contains input instructions for all three codes. The STEALTH codes are based entirely on the published technology of the Lawrence Livermore National Laboratory, Livermore, California, and Sandia National Laboratories, Albuquerque, New Mexico.
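To illustrate the explicit finite difference technique the abstract refers to, here is a minimal one-dimensional sketch (illustrative only, not STEALTH's actual Lagrangian formulation): central differences in space and time for the wave equation u_tt = c² u_xx with fixed ends.

```python
# Explicit central-difference update for the 1D wave equation (sketch).
# Stability requires the Courant number c*dt/dx <= 1.
import math

def step(u_prev, u_curr, c, dt, dx):
    """Advance one explicit time step; interior points only, fixed boundaries."""
    u_next = u_curr[:]  # copy; endpoints stay fixed
    r2 = (c * dt / dx) ** 2  # squared Courant number
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next

# Example: a small grid with an initial sine displacement, started from rest
# (crude start: u_prev = u_curr).
n, c, dx = 11, 1.0, 0.1
dt = 0.5 * dx / c  # respects the CFL limit
u0 = [math.sin(math.pi * i * dx) for i in range(n)]
u1 = step(u0, u0, c, dt, dx)
```

The appeal of explicit schemes for transient dynamics is exactly this locality: each new value depends only on a few neighbors at the previous two time levels, with no matrix inversion.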

  3. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.

  4. Polish Code of Ethics of a Medical Laboratory Specialist

    PubMed Central

    2014-01-01

    Along with the development of medicine, an increasingly significant role has been played by laboratory diagnostics. For over ten years the profession of the medical laboratory specialist has been regarded in Poland as an autonomous medical profession enjoying the status of a profession of public trust. The process of education of medical laboratory specialists consists of a five-year degree in laboratory medicine, offered at Medical Universities, and of a five-year vocational specialization in one of the fields of laboratory medicine such as clinical biochemistry, medical microbiology, medical laboratory toxicology, medical laboratory cytomorphology and medical laboratory transfusiology. An important component of medical laboratory specialists' identity is awareness of the ethos inherited from bygone generations of workers in this profession and the need to continue its further development. An expression of this awareness is, among others, the Polish Code of Ethics of a Medical Laboratory Specialist (CEMLS), containing a set of values and a moral standpoint characteristic of this professional environment. Presenting the ethos of the medical laboratory specialist is the purpose of this article. The authors focus on the role CEMLS plays in the areas of professional ethics and law. Next, they reconstruct the Polish model of the ethos of medical diagnostic laboratory personnel. The overall picture consists of a presentation of the general moral principles concerning the exercise of this profession and the rules of conduct in relations with the patient, one's own professional environment and the rest of society. The Polish model of ethical conduct, which is rooted in the Hippocratic medical tradition, harmonizes with the ethos of medical laboratory specialists of other European countries and the world.

  5. Performance analysis of a LDPC coded OFDM communication system in shallow water acoustic channels

    NASA Astrophysics Data System (ADS)

    Liu, Shengxing; Xu, Xiaomei

    2012-11-01

    Significant time-varying multipath interference is the major obstacle to reliable data communication in shallow water acoustic channels. In this paper, the performance of a low density parity check (LDPC) coded orthogonal frequency division multiplexing (OFDM) communication system is investigated for these channels. The initial message for LDPC decoding by the belief propagation (BP) algorithm is derived for OFDM underwater acoustic channels; based on this derivation, the noise thresholds of regular LDPC codes with different code rates are obtained by using the density evolution algorithm. Furthermore, a communication system model combining LDPC coding, OFDM and channel interleaving for shallow water acoustic channels is introduced. The effect of modulation and coding schemes on LDPC code performance is investigated by simulation. The results show that the system can achieve remarkable performance in shallow water acoustic channels, and that the performance improves with increasing code length and decreasing code rate. The bit error rate (BER) of the system with QPSK modulation, a code length of 1280 and a code rate of 1/2 is less than 10^-5 when the signal-to-noise ratio (SNR) is greater than 6.8 dB. These values are obtained for a five-path shallow water acoustic channel of Xiamen harbor.
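The "initial message" mentioned above is the vector of channel log-likelihood ratios (LLRs) handed to the BP decoder. As a hedged sketch of the simplest case (not the paper's underwater-channel derivation): for Gray-mapped QPSK, each quadrature behaves like BPSK in additive white Gaussian noise, for which the standard per-bit LLR is 2y/σ².

```python
# Channel LLRs for +/-1 BPSK symbols in AWGN: LLR = 2*y / sigma^2.
# Positive LLR favors the symbol +1; magnitude encodes confidence.

def initial_llrs(received, noise_var):
    """Per-bit log-likelihood ratios fed to a BP decoder as initial messages."""
    return [2.0 * y / noise_var for y in received]

# Example: a strongly positive sample gives a confident positive LLR,
# a near-zero sample gives an uncertain one.
llrs = initial_llrs([0.9, -1.1, 0.2], 0.5)
```

Density evolution, in turn, tracks how the distribution of such messages evolves over BP iterations to find the worst noise level (threshold) a given code ensemble can tolerate.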

  6. Adding Drift Kinetics to a Global MHD Code

    NASA Astrophysics Data System (ADS)

    Lyon, J.; Merkin, V. G.; Zhang, B.; Ouellette, J.

    2015-12-01

    Global MHD models have generally been successful in describing the behavior of the magnetosphere at large and meso-scales. An exception is the inner magnetosphere, where energy dependent particle drifts are essential in the dynamics and evolution of the ring current. Even in the tail, particle drifts are a significant perturbation on the MHD behavior of the plasma. The most common drift addition to MHD has been inclusion of the Hall term in Faraday's Law. There have been attempts in the space physics context to include gradient and curvature drifts within a single fluid MHD picture. These have not been terribly successful because the use of a single, Maxwellian distribution does not capture the energy dependent nature of the drifts. The advent of multi-fluid MHD codes leads to a reconsideration of this problem. The Vlasov equation can be used to define individual "species" which cover a specific energy range. Each fluid can then be treated as having a separate evolution. We take the approach of the Rice Convection Model (RCM) that each energy channel can be described by a distribution that is essentially isotropic in the guiding center picture. In the local picture, this gives rise to drifts that can be described in terms of the energy dependent inertial and diamagnetic drifts. By extending the MHD equations with these drifts we can get a system which reduces to the RCM approach in the slow-flow inner magnetosphere but is not restricted to cases where the flow speed is small. The restriction is that the equations can be expanded in the ratio of the Larmor radius to the gradient scale lengths. At scales approaching di, the assumption of gyrotropic (or isotropic) distributions breaks down. In addition to the drifts, the formalism can also be used to include finite Larmor radius effects on the pressure tensor (gyro-viscosity). We present some initial calculations with this method.
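For reference, the energy-dependent guiding-center drifts at issue take the standard textbook form (generic expressions, not specific to this model):

```latex
% Gradient-B and curvature drifts of a charged particle (textbook forms):
\begin{align}
  \mathbf{v}_{\nabla B} &= \frac{m v_{\perp}^{2}}{2\,q B^{3}}\,
      \mathbf{B} \times \nabla B , &
  \mathbf{v}_{R} &= \frac{m v_{\parallel}^{2}}{q B^{2}}\,
      \frac{\mathbf{R}_c \times \mathbf{B}}{R_c^{2}} ,
\end{align}
% with R_c the radius-of-curvature vector of the field line. Both drifts
% scale with particle energy, which is why a single-Maxwellian MHD closure
% cannot capture them and an energy-binned (multi-fluid) treatment is needed.
```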

  7. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% of redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
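The key observation is that the receiver knows the watermarking bits in advance, so it can measure the channel noise directly from them. A minimal sketch of that estimation step (hypothetical helper, not the paper's implementation):

```python
# Estimate the channel noise variance from known watermark positions:
# the receiver compares what it received there against what it knows was sent.

def estimate_noise_var(received_wm, known_symbols):
    """Sample variance of (received - known) over the watermark positions."""
    errs = [r - s for r, s in zip(received_wm, known_symbols)]
    return sum(e * e for e in errs) / len(errs)

# Example: +1/-1 watermark symbols observed with small perturbations.
sigma2 = estimate_noise_var([1.1, -0.8, 0.95], [1.0, -1.0, 1.0])
```

The resulting variance then parameterizes the Gaussian PDF used to compute the BP decoder's initial messages, replacing a blind or stale noise estimate.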

  8. gevolution: a cosmological N-body code based on General Relativity

    NASA Astrophysics Data System (ADS)

    Adamek, Julian; Daverio, David; Durrer, Ruth; Kunz, Martin

    2016-07-01

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and going beyond the usually adopted quasi-static approximation. Our code is publicly available.

  9. Codes for a priority queue on a parallel data bus. [Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wallis, D. E.; Taylor, H.

    1979-01-01

    Some codes for arbitration of priorities among subsystem computers or peripheral device controllers connected to a parallel data bus are described. At arbitration time, several subsystems present wire-OR, parallel code words to the bus, and the central computer can identify the subsystem of highest priority and determine which of two or more transmission services the subsystem requires. A mathematical discussion of the optimality of the codes with regard to the number of subsystems that may participate in the scheme for a given number of wires is presented along with the number of services that each subsystem may request.
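The wired-OR mechanism can be sketched in a few lines. This is a generic illustration, not the paper's specific codes: the bus carries the bitwise OR of all asserted code words, and the scheme only works because the codes are constructed so that a non-asserting subsystem's word never appears as a subset of the OR; designing such codes optimally for a given number of wires is precisely the paper's subject.

```python
# Wired-OR bus arbitration sketch: lower index = higher priority.

def bus_or(words):
    """Bitwise OR of all code words asserted on the bus."""
    result = 0
    for w in words:
        result |= w
    return result

def highest_priority(words_by_priority, bus):
    """First (highest-priority) subsystem whose word is contained in the OR."""
    for prio, w in enumerate(words_by_priority):
        if w and (bus & w) == w:
            return prio
    return None

# Example: subsystems 1 and 3 assert their words; subsystem 2 stays idle.
words = [0, 0b0011, 0b0101, 0b1001]      # assigned code words
asserted = [0, 0b0011, 0, 0b1001]        # what each actually drives
winner = highest_priority(words, bus_or(asserted))
```

Here the OR is 0b1011; subsystem 1's word is contained in it while idle subsystem 2's is not, so arbitration resolves correctly in a single bus cycle.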

  10. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. 
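The four discrimination criteria named above have compact textbook definitions, and the resulting criterion values are typically converted into normalized model weights. The following is a back-of-the-envelope sketch using the standard formulas (MMA's exact conventions may differ):

```python
# Model-discrimination criteria and Akaike-style model weights (sketch).
import math

def aic(ll, k):
    """Akaike Information Criterion: ll = maximized log-likelihood, k = parameters."""
    return -2.0 * ll + 2.0 * k

def aicc(ll, k, n):
    """Second-order-bias-corrected AIC for n observations."""
    return aic(ll, k) + 2.0 * k * (k + 1) / (n - k - 1)

def bic(ll, k, n):
    """Bayesian Information Criterion."""
    return -2.0 * ll + k * math.log(n)

def model_weights(criteria):
    """Normalized weights from criterion values (smaller is better)."""
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Example: two calibrated models fit to n = 20 observations. The second fits
# slightly better but uses more parameters, so AICc penalizes it.
scores = [aicc(-10.0, 3, 20), aicc(-9.5, 5, 20)]
w = model_weights(scores)
```

Weights computed this way feed directly into model-averaged parameter estimates and predictions, which is how MMA integrates the variations represented by the alternative models.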

  11. Developing a Working Code of Ethics for Human Resource Personnel.

    ERIC Educational Resources Information Center

    Rampal, Kuldip R.

    1991-01-01

    To develop codes of ethics for their profession, college human resources personnel must first understand their primary job-related responsibilities. These include being alert to evolving organizational needs; coordinating needed training of employees; appreciating the nuances of psychology, communication, and motivation; and observing employee…

  12. A Normative Code of Conduct for Admissions Officers

    ERIC Educational Resources Information Center

    Hodum, Robert L.

    2012-01-01

    The increasing competition for the desired quantity and quality of college students, along with the rise of for-profit institutions, has amplified the scrutiny of behavior and ethics among college admissions professionals and has increased the need for meaningful ethical guidelines and codes of conduct. Many other areas of responsibility within…

  13. ABAREX: A neutron spherical optical-statistical model code

    SciTech Connect

    Lawson, R.D.

    1992-06-01

    The spherical optical-statistical model is briefly reviewed and the capabilities of the neutron scattering code, ABAREX, are presented. Input files for ten examples, in which neutrons are scattered by various nuclei, are given and the output of each run is discussed in detail.

  14. Pupils, Resistance and Gender Codes: A Study of Classroom Encounters.

    ERIC Educational Resources Information Center

    Riddell, Sheila

    1989-01-01

    Using classroom observation in two rural English comprehensive schools, demonstrates that both teachers' coping strategies and students' opposition to education reinforce traditional gender codes by drawing on exaggerated notions of masculinity and femininity. Student resistance to teacher authority through sex role manipulation reproduces, rather…

  15. Nurses' codes of ethics in practice and education: a review of the literature.

    PubMed

    Numminen, Olivia; van der Arend, Arie; Leino-Kilpi, Helena

    2009-06-01

    The purpose of this review was to provide an overview of the empirical literature on nurses' codes of ethics in practice and education, covering the period from 1980 to August 2007. The focus was on methodological issues, the main domains of interest and the findings of the studies. The aim of the review was to identify knowledge gaps and to provide recommendations for further research. Research on codes of ethics in nursing is scarce. The main domains of interest were education, nurses' knowledge and use of the codes, the content and functions of the codes, and moral behaviour and values related to the codes. Education in the codes was important, and it had a positive impact on students' moral behaviour as measured by an instrument based on the codes. Nurses' knowledge and use of the codes were deficient. Nurses' practice was guided by environmental contexts and personal experiences rather than the codes. However, nurses' values espoused those of the codes. The nurse-patient relationship was the best known aspect of the codes. Methodological diversity and the small number of studies, spread over several domains of interest, warrant care in the interpretation of the findings. Further research should focus particularly on education in the codes, covering the realization of the teaching process, evaluation of outcomes and organization of education. Cooperation between theoretical education and clinical practice should be explored. Research on the meaning of the codes and their functions for nurses, nurses' moral behaviour and professional values is needed. Research should cover all levels and areas of nursing and reach beyond the nurse-patient relationship to relationships with colleagues, other health professions, organizations and society. The use of more varied methodological approaches is suggested. PMID:19077064

  16. Verification & Validation Toolkit to Assess Codes: Is it Theory Limitation, Numerical Method Inadequacy, Bug in the Code or a Serious Flaw?

    NASA Astrophysics Data System (ADS)

    Bombardelli, F. A.; Zamani, K.

    2014-12-01

    We introduce and discuss an open-source, user-friendly, numerical post-processing piece of software for assessing the reliability of modeling results from environmental fluid mechanics codes. Verification and Validation, Uncertainty Quantification (VAVUQ) is a toolkit developed in Matlab© for general V&V purposes. In this work, the VAVUQ implementation of V&V techniques and its user interfaces are discussed. VAVUQ is able to read Excel, Matlab, ASCII, and binary files, and it produces a log of the results in txt format. Each capability of the code is then discussed through an example. The first example is code verification of a sediment transport code, developed with the Finite Volume Method, via the method of exact solutions (MES). The second example is solution verification of a code for groundwater flow, developed with the Boundary Element Method, via MES. The third example is solution verification of a mixed-order, Compact Difference Method code for heat transfer via the method of manufactured solutions (MMS). The fourth example is solution verification of a 2-D, Finite Difference Method code for floodplain analysis via Complete Richardson Extrapolation. In turn, the application of VAVUQ in quantitative model skill assessment studies (validation) of environmental codes is given through two examples: validation of a two-phase flow computational model of air entrainment in a free surface flow against lab measurements, and heat transfer modeling at the earth's surface against field measurements. Finally, we discuss practical considerations and common pitfalls in the interpretation of V&V results.
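One building block of such solution verification is estimating the observed order of accuracy from solutions on three systematically refined grids via Richardson extrapolation. A minimal sketch of the standard formula (for a constant refinement ratio r, not VAVUQ's actual code):

```python
# Observed order of accuracy from three grid solutions via Richardson
# extrapolation: p = log(|f_c - f_m| / |f_m - f_f|) / log(r).
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order from coarse/medium/fine solutions, ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Example: successive differences shrinking by 4x per grid halving
# indicate second-order convergence.
p = observed_order(1.16, 1.04, 1.01, 2.0)
```

Comparing the observed order p against the scheme's formal order is one of the checks that separates a theory limitation from a numerical-method inadequacy or an outright bug.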

  17. ACFAC: a cash flow analysis code for estimating product price from an industrial operation

    SciTech Connect

    Delene, J.G.

    1980-04-01

    A computer code is presented which uses a discounted cash flow methodology to obtain an average product price for an industrial process. The general discounted cash flow method is discussed. Special code options include multiple treatments of interest during construction and other preoperational costs, investment tax credits, and different methods for tax depreciation of capital assets. Two options for allocating the cost of plant decommissioning are available. The FORTRAN code listing and the computer output for a sample problem are included.
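The core idea, before taxes, depreciation, and decommissioning options are layered on, is to find the product price at which discounted revenues cover discounted costs. A simplified sketch of that levelized-price arithmetic (generic, not ACFAC's actual treatment):

```python
# Levelized product price from a discounted cash flow (simplified sketch):
# the price per unit such that PV(revenue) = capital + PV(operating costs).

def levelized_price(capital, annual_cost, annual_output, rate, years):
    """Price making the present value of revenues equal that of all costs."""
    pv_factor = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    pv_costs = capital + annual_cost * pv_factor
    pv_output = annual_output * pv_factor
    return pv_costs / pv_output

# Example: a $1000 plant, $50/yr operating cost, 100 units/yr output,
# 10% discount rate, 10-year life.
price = levelized_price(1000.0, 50.0, 100.0, 0.10, 10)
```

ACFAC's options, interest during construction, investment tax credits, and tax depreciation schedules, all enter as additional terms in the cost side of this balance.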

  18. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy David; Krolik, Julian H.

    2013-01-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  19. 50 CFR Table 14a to Part 679 - Port of Landing Codes 1, Alaska

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Port of Landing Codes 1, Alaska 14a Table... ALASKA Pt. 679, Table 14a Table 14a to Part 679—Port of Landing Codes 1, Alaska Port Name NMFS Code ADF&G... Inlet 124 XIP False Pass 125 FSP Fairbanks 305 FBK Galena 306 GAL Glacier Bay 307 GLB Glennallen 308...

  20. 50 CFR Table 14a to Part 679 - Port of Landing Codes 1, Alaska

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 11 2011-10-01 2011-10-01 false Port of Landing Codes 1, Alaska 14a Table... ALASKA Pt. 679, Table 14a Table 14a to Part 679—Port of Landing Codes 1, Alaska Port Name NMFS Code ADF&G... Inlet 124 XIP False Pass 125 FSP Fairbanks 305 FBK Galena 306 GAL Glacier Bay 307 GLB Glennallen 308...

  1. 50 CFR Table 14a to Part 679 - Port of Landing Codes 1, Alaska

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Port of Landing Codes 1, Alaska 14a Table... ALASKA Pt. 679, Table 14a Table 14a to Part 679—Port of Landing Codes 1, Alaska Port Name NMFS Code ADF&G... Inlet 124 XIP False Pass 125 FSP Fairbanks 305 FBK Galena 306 GAL Glacier Bay 307 GLB Glennallen 308...

  2. 50 CFR Table 14a to Part 679 - Port of Landing Codes 1, Alaska

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Port of Landing Codes 1, Alaska 14a Table... ALASKA Pt. 679, Table 14a Table 14a to Part 679—Port of Landing Codes 1, Alaska Port Name NMFS Code ADF&G... Inlet 124 XIP False Pass 125 FSP Fairbanks 305 FBK Galena 306 GAL Glacier Bay 307 GLB Glennallen 308...

  3. Mapa: an object-oriented code with a graphical user interface for accelerator design and analysis

    SciTech Connect

    Shasharina, Svetlana G.; Cary, John R.

    1997-02-01

    We developed a code for accelerator modeling that allows users to create and analyze accelerators through a graphical user interface (GUI). The GUI can read an accelerator from files or create it by adding, removing and changing elements. It also creates 4D orbits and lifetime plots. The code includes a set of accelerator element classes and C++ utility and GUI libraries. Owing to the GUI, the code is easy to use and expand.

  4. Mapa: an object-oriented code with a graphical user interface for accelerator design and analysis

    SciTech Connect

    Shasharina, S.G.; Cary, J.R.

    1997-02-01

    We developed a code for accelerator modeling that allows users to create and analyze accelerators through a graphical user interface (GUI). The GUI can read an accelerator from files or create it by adding, removing and changing elements. It also creates 4D orbits and lifetime plots. The code includes a set of accelerator element classes and C++ utility and GUI libraries. Owing to the GUI, the code is easy to use and expand. © 1997 American Institute of Physics.

  5. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.

  6. A modified code for R-mode correspondence analysis of large-scale problems

    NASA Astrophysics Data System (ADS)

    Shi, Mingren; Carr, James R.

    2001-03-01

    A modified FORTRAN code for R-mode correspondence analysis is presented for large-scale digital image analysis problems. It reduces the required memory capacity by more than a factor of 4M, where M is the number of variables considered per sample: the original code, CORSPOND, requires 4NM memory, where N is the total number of samples considered, whereas the modified code requires only an N-dimensional vector to form the matrix S. For a PC having 128 MB RAM, the code can be used for a problem with up to N = 253,000 and M = 30. The code automatically detects negative values and shifts them to zero or positive values. Applications to grey-level co-occurrence matrix parameter data are used to demonstrate the efficiency of the code.
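    The memory saving described above follows a generic streaming pattern: rather than holding the full N x M data matrix, an M x M cross-product style matrix can be accumulated one sample row at a time. The sketch below illustrates only that pattern; the actual matrix S formed by the modified code is specific to correspondence analysis.

```python
def accumulate_cross_product(row_iter, m):
    """Accumulate S = sum over samples of outer(x, x) as an M x M matrix,
    reading one length-M sample row at a time instead of storing all N rows."""
    s = [[0.0] * m for _ in range(m)]
    for row in row_iter:              # one sample at a time
        for i in range(m):
            xi = row[i]
            if xi == 0.0:
                continue              # skip zero entries (common in sparse images)
            for j in range(m):
                s[i][j] += xi * row[j]
    return s

rows = [[1.0, 2.0], [3.0, 4.0]]       # toy data: N = 2 samples, M = 2 variables
S = accumulate_cross_product(iter(rows), 2)
```

    Peak memory is O(M^2 + M) per pass instead of O(NM), which is the essence of the reported factor-of-4M reduction when N is large.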

  7. The non-power model of the genetic code: a paradigm for interpreting genomic information.

    PubMed

    Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo

    2016-03-13

    In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. Also, it provides us with new tools for the analysis of genomic sequences. We briefly review three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted in number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. PMID:26857679
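    The key notion of a redundant (non-power) representation can be illustrated with a standard toy system: base-2 positional notation with the digit set {0, 1, 2}, in which some integers have several distinct digit strings. The counting function below is illustrative only; the paper's actual representation used to model codon degeneracy differs.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_reps(n):
    """Number of ways to write n as sum(d_i * 2**i) with digits d_i in {0, 1, 2}.
    Because the digit set is redundant, some integers have several representations,
    by analogy with the degeneracy of codons coding for one amino acid."""
    if n == 0:
        return 1
    if n % 2 == 1:
        return count_reps((n - 1) // 2)                    # trailing digit must be 1
    return count_reps(n // 2) + count_reps((n - 2) // 2)   # trailing digit 0 or 2

degeneracy = {n: count_reps(n) for n in range(1, 9)}
```

    The uneven pattern of representation counts (e.g. 4 has three representations, 3 has only one) is the arithmetical analogue of the uneven degeneracy classes of the genetic code.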

  8. TVENT1: a computer code for analyzing tornado-induced flow in ventilation systems

    SciTech Connect

    Andrae, R.W.; Tang, P.K.; Gregory, W.S.

    1983-07-01

    TVENT1 is a new version of the TVENT computer code, which was designed to predict the flows and pressures in a ventilation system subjected to a tornado. TVENT1 is essentially the same code but has added features for turning blowers off and on, changing blower speeds, and changing the resistance of dampers and filters. These features make it possible to depict a sequence of events during a single run. Other features also have been added to make the code more versatile. Example problems are included to demonstrate the code's applications.

  9. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    SciTech Connect

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  10. Clustering of neural code words revealed by a first-order phase transition

    NASA Astrophysics Data System (ADS)

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.

  11. Clustering of neural code words revealed by a first-order phase transition.

    PubMed

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network. PMID:27415307
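    The core measurement in this study, the entropy of observed code words grouped by Hamming distance from a reference word, can be sketched on synthetic binary activity patterns (the study itself uses retinal recordings):

```python
import math
from collections import Counter

def hamming(a, b):
    """Hamming distance between two equal-length binary words."""
    return sum(x != y for x, y in zip(a, b))

def entropy_by_distance(words, reference):
    """Shannon entropy (bits) of the empirical code-word distribution,
    computed separately at each Hamming distance from the reference word."""
    by_d = {}
    for w in words:
        by_d.setdefault(hamming(w, reference), []).append(w)
    result = {}
    for d, group in by_d.items():
        counts = Counter(group)
        total = len(group)
        result[d] = -sum((c / total) * math.log2(c / total)
                         for c in counts.values())
    return result

# toy "recordings": 4-neuron binary states, referenced to the all-silent state
silent = (0, 0, 0, 0)
observed = [(0, 0, 0, 0), (0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0)]
h = entropy_by_distance(observed, silent)
```

    Gaps in this entropy-versus-distance profile are what the authors interpret as boundaries between clusters of code words.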

  12. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    PubMed

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing neighbor-joining (NJ) and maximum-likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although the NJ and ML methods then performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1,094 brown algae queries, both using ITS barcodes.
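    The general idea of alignment-free, kernel-based species assignment can be sketched as follows. This toy pairs k-mer frequency features with an RBF similarity and assigns a query to the species of its most similar reference; it is not the paper's DV-RBF or FJ-RBF algorithm, and all sequences and names are hypothetical.

```python
import math
from collections import Counter
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=2)]  # all 2-mers

def kmer_vector(seq, k=2):
    """Alignment-free feature vector: normalized k-mer frequencies."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts.get(km, 0) / total for km in KMERS]

def rbf(u, v, gamma=10.0):
    """Gaussian (RBF) kernel similarity between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def assign(query, references, gamma=10.0):
    """Assign the query to the species of the most RBF-similar reference."""
    q = kmer_vector(query)
    best = max(references, key=lambda r: rbf(q, kmer_vector(r[1]), gamma))
    return best[0]

refs = [("species_A", "ACGTACGTACGT"), ("species_B", "GGGGCCCCGGGG")]
label = assign("ACGTACGAACGT", refs)  # composition close to species_A
```

    Because the features are k-mer frequencies rather than aligned positions, the same machinery applies to non-coding barcodes where robust alignment is problematic.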

  13. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. The original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk, and distortion, primarily because the digital signal can be faithfully regenerated at each repeater purely on the basis of a binary decision; the end-to-end performance of a digital link therefore becomes essentially independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where the speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  14. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 6: piping systems manual. Computer code manual

    SciTech Connect

    Cohen, L.M.

    1982-03-01

    This volume documents the STEALTH piping numerical code, which can simulate the time-dependent flow phenomena that occur in piping systems. This volume also contains the input instructions for the STEALTH piping code, and a sample problem of a pipe flow simulation.

  15. Development and testing of a Monte Carlo code system for analysis of ionization chamber responses

    SciTech Connect

    Johnson, J.O.; Gabriel, T.A.

    1986-01-01

    To predict how the presence of a detector perturbs the interactions between radiation and material, a differential Monte Carlo computer code system entitled MICAP was developed and tested. This code system determines the neutron, photon, and total response of an ionization chamber to mixed-field radiation environments. To demonstrate the ability of MICAP to calculate an ionization chamber response function, a comparison was made to 05S, an established Monte Carlo code extensively used to accurately calibrate liquid organic scintillators. Both code systems modeled an organic scintillator with a parallel beam of monoenergetic neutrons incident on the scintillator. (LEW)

  16. F2D users manual: A two-dimensional compressible gas flow code

    NASA Astrophysics Data System (ADS)

    Suo-Anttila, A.

    1993-08-01

    The F2D computer code is a general purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equation is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow, shock tubes, and flow in heated porous media.

  17. F2D users manual: A two-dimensional compressible gas flow code

    SciTech Connect

    Suo-Anttila, A.

    1993-08-01

    The F2D computer code is a general purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equation is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow, shock tubes and flow in heated porous media.

  18. A new 3-D integral code for computation of accelerator magnets

    SciTech Connect

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite-element codes: far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.

  19. Development and validation of GWHEAD, a three-dimensional groundwater head computer code

    SciTech Connect

    Beckmeyer, R.R.; Root, R.W.; Routt, K.R.

    1980-03-01

    A computer code has been developed to solve the groundwater flow equation in three dimensions. The code uses finite-difference approximations solved by the strongly implicit solution procedure. Input parameters to the code include hydraulic conductivity, specific storage, porosity, accretion (recharge), and initial hydraulic head. These parameters may be input as spatially varying. The hydraulic conductivity may be input as isotropic or anisotropic. The boundaries either may permit flow across them or may be impermeable. The code has been used to model leaky confined groundwater conditions and spherical flow to a continuous point sink, both of which have exact analytical solutions. The results generated by the computer code compare well with those of the analytical solutions. The code was designed to model groundwater flow beneath fuel reprocessing and waste storage areas at the Savannah River Plant.
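    The finite-difference approach described above can be reduced to a minimal analogue: a steady 1-D head equation with fixed-head (Dirichlet) boundaries, homogeneous conductivity, and no recharge, solved by Gauss-Seidel sweeps. GWHEAD itself works in 3-D with the strongly implicit procedure; this is only an illustrative simplification.

```python
def steady_head_1d(h_left, h_right, n_cells, iters=5000):
    """Steady 1-D groundwater head with fixed-head boundaries.
    For homogeneous conductivity and no recharge, the head satisfies
    Laplace's equation, so each interior node relaxes to the mean of
    its neighbors (Gauss-Seidel iteration)."""
    h = [0.0] * n_cells
    h[0], h[-1] = h_left, h_right
    for _ in range(iters):
        for i in range(1, n_cells - 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h

heads = steady_head_1d(10.0, 2.0, 9)
# exact solution: head varies linearly between the two fixed boundaries
```

    The exact solution here is linear, which makes the sketch a convenient self-check, just as the report validates GWHEAD against problems with exact analytical solutions.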

  20. Codes of environmental management practice: Assessing their potential as a tool for change

    SciTech Connect

    Nash, J.; Ehrenfeld, J.

    1997-12-31

    Codes of environmental management practice emerged as a tool of environmental policy in the late 1980s. Industry and other groups have developed codes for two purposes: to change the environmental behavior of participating firms and to increase public confidence in industry's commitment to environmental protection. This review examines five codes of environmental management practice: Responsible Care, the International Chamber of Commerce's Business Charter for Sustainable Development, ISO 14000, the CERES Principles, and The Natural Step. The first three codes have been drafted and promoted primarily by industry; the others have been developed by non-industry groups. These codes have spurred participating firms to introduce new practices, including the institution of environmental management systems, public environmental reporting, and community advisory panels. The extent to which codes are introducing a process of cultural change is considered in terms of four dimensions: new consciousness, norms, organization, and tools. 94 refs., 3 tabs.

  1. The weight hierarchies and chain condition of a class of codes from varieties over finite fields

    NASA Technical Reports Server (NTRS)

    Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal support structures of the subcodes and are very useful in several fields. It was found that the chain condition of a linear code is convenient in studying the generalized Hamming weights of product codes. In this paper we consider a class of codes defined over some varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of classical groups over finite fields, i.e., the symplectic groups, the unitary groups, and the orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that the codes satisfy the chain condition.
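    For a small code, the weight hierarchy can be computed directly from Wei's definition: the r-th generalized Hamming weight d_r is the minimum support size over all r-dimensional subcodes. The brute-force sketch below does this for a toy binary [4, 2] code; it is a definitional illustration, not the orbit-counting method of the paper.

```python
from itertools import combinations, product

def span(basis, n):
    """All codewords in the GF(2)-span of the given basis vectors."""
    words = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        words.add(tuple(sum(c * v[i] for c, v in zip(coeffs, basis)) % 2
                        for i in range(n)))
    return words

def support_size(words):
    """Number of coordinate positions where some codeword is nonzero."""
    n = len(next(iter(words)))
    return sum(any(w[i] for w in words) for i in range(n))

def weight_hierarchy(gen_rows, n):
    """d_r = minimum support size over all r-dimensional subcodes (brute force)."""
    nonzero = [w for w in span(gen_rows, n) if any(w)]
    k = len(gen_rows)
    hierarchy = []
    for r in range(1, k + 1):
        best = n
        for subset in combinations(nonzero, r):
            sub = span(list(subset), n)
            if len(sub) == 2 ** r:  # the r chosen words are linearly independent
                best = min(best, support_size(sub))
        hierarchy.append(best)
    return hierarchy

# the binary [4, 2] code generated by 1100 and 0011
d = weight_hierarchy([(1, 1, 0, 0), (0, 0, 1, 1)], 4)
```

    Note that d_1 is the ordinary minimum distance and the hierarchy is strictly increasing, two basic facts the brute force reproduces.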

  2. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.

  3. Code OK2—A simulation code of ion-beam illumination on an arbitrary shape and structure target

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Someya, T.

    2004-08-01

    For computer simulations of heavy ion beam (HIB) irradiation on a spherical fuel pellet in heavy ion fusion (HIF), the code OK1 was developed and presented in [Comput. Phys. Commun. 157 (2004) 160-172]. The new code OK2 is an upgraded version of that program for more general purposes in the research fields of medical treatment and material processing as well as HIF. OK2 provides computational capabilities for three-dimensional ion-beam energy deposition on a target with an arbitrary shape and structure.
    Program summary
    Title of program: OK2
    Catalogue identifier: ADTZ
    Other versions of this program [1]: OK1 (catalogue identifier: ADST)
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTZ
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: PC (Pentium 4, ~1 GHz or more recommended)
    Operating system: Windows or UNIX
    Programming language used: C++
    Memory required to execute with typical data: 2048 MB
    No. of bits in a word: 32
    No. of processors used: 1 CPU
    Has the code been vectorized or parallelized: No
    No. of bytes in distributed program, including test data: 17 334
    No. of lines in distributed program, including test data: 1487
    Distribution format: tar gzip file
    Nature of physical problem: In the research areas of HIF (heavy ion beam inertial fusion) energy [1-4] and medical and material sciences [5], ion energy deposition profiles need to be evaluated and calculated precisely. Due to the favorable energy deposition behavior of ions in matter [1-4], it is expected that ion beams would be one of the preferable candidates in various fields including HIF and material processing. Especially in HIF, for a successful fuel ignition and a sufficient fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [4,6,7]. In order to meet this requirement we need to evaluate the uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB

  4. Learning a multi-dimensional companding function for lossy source coding.

    PubMed

    Maeda, Shin-ichi; Ishii, Shin

    2009-09-01

    Although the importance of lossy source coding has been growing, the general and practical methodology for its design has not been completely resolved. The well-known vector quantization (VQ) can represent any fixed-length lossy source coding, but requires too much computation resource. Companding vector quantization (CVQ) can reduce the complexity of non-structured VQ by replacing vector quantization with a set of scalar quantizations and can represent a wide class of practically useful VQs. Although an analytical derivation of optimal CVQ is difficult except for very limited cases, optimization using data samples can be performed instead. Here we propose a CVQ optimization method, which includes bit allocation by a newly derived distortion formula as a generalization of Bennett's formula, and test its validity. We applied the method to transform coding and compared the performance of our CVQ with those of Karhunen-Loève transformation (KLT)-based coding and non-structured VQ. As a consequence, we found that our trained CVQ outperforms not only KLT-based coding but also non-structured VQ in the case of high bit-rate coding of linear mixtures of uniform sources. We also found that trained CVQ even outperformed KLT-based coding in the low bit-rate coding of a Gaussian source. PMID:26857679
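    The classic scalar instance of companded quantization is mu-law coding: compress the sample through a nonlinearity, quantize uniformly, then expand. The sketch below uses the standard mu-law curve as the compander; the paper's method instead learns a multi-dimensional companding function from data.

```python
import math

MU = 255.0  # standard mu-law constant

def compress(x):
    """mu-law compressor for x in [-1, 1]: expands resolution near zero."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """mu-law expander, the exact inverse of compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(x, levels=256):
    """Companded scalar quantizer: compress, uniform quantize, expand."""
    y = compress(x)
    step = 2.0 / levels
    q = (math.floor(y / step) + 0.5) * step  # mid-rise uniform quantizer
    return expand(q)

x = 0.01          # small-amplitude sample
xq = quantize(x)  # reconstruction error is small relative to |x|
```

    A CVQ generalizes exactly this structure: the fixed mu-law curve is replaced by a trainable vector-valued companding function, and the uniform quantizer by a set of per-coordinate scalar quantizers with optimized bit allocation.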

  5. A mathematical approach to the study of the United States Code

    NASA Astrophysics Data System (ADS)

    Bommarito, Michael J.; Katz, Daniel M.

    2010-10-01

    The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language.
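    The paper's representation, the union of a hierarchical network and a citation network over the Code's provisions, can be sketched with a toy graph and the kind of size measurements the authors track over time. The titles, chapters, and cross-references below are invented for illustration.

```python
from collections import defaultdict

def build_union_graph(hierarchy_edges, citation_edges):
    """Undirected union of a hierarchy tree and a citation network
    over vertices representing provisions of the Code."""
    adj = defaultdict(set)
    for u, v in hierarchy_edges + citation_edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def measure(adj):
    """Basic structural measurements of the kind tracked across snapshots."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    return {"vertices": n, "edges": m, "avg_degree": 2 * m / n}

# toy snapshot: a Title containing chapters and sections, plus cross-references
hierarchy = [("T26", "ch1"), ("ch1", "s1"), ("ch1", "s2"),
             ("T26", "ch2"), ("ch2", "s3")]
citations = [("s1", "s3"), ("s2", "s3")]
stats = measure(build_union_graph(hierarchy, citations))
```

    Comparing such measurements between snapshots (e.g. October 2008 versus March 2010) is how growth in structure and interdependence is quantified.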

  6. The Basic Aerodynamics Research Tunnel - A facility dedicated to code validation

    NASA Technical Reports Server (NTRS)

    Sellers, William L., III; Kjelgaard, Scott O.

    1988-01-01

    Computational fluid dynamics code-validation requirements are discussed, together with the need for close interaction between experiment and code development. Code-validation experiments require a great deal of data, and for the experiments to be successful, a highly productive research facility is required. A description is provided of the NASA Langley Basic Aerodynamics Research Tunnel (BART), with emphasis on the instrumentation and experimental techniques that make the facility ideally suited to code-validation experiments. Results are presented from recent tests which illustrate the techniques used in BART.

  7. DOGS: a collection of graphics for support of discrete ordinates codes

    SciTech Connect

    Ingersoll, D.T.; Slater, C.O.

    1980-03-01

    A collection of computer codes called DOGS (Discrete Ordinates Graphics Support) has been developed to assist in the display and presentation of data generated by commonly used discrete ordinates transport codes. The DOGS codes include: EGAD for plotting two-dimensional geometries, ISOPLOT4 for plotting 2-D fluxes in a contour line fashion, FORM for plotting 2-D fluxes in a 3-D surface fashion, ACTUAL for calculating 2-D activities, TOOTH for calculating and plotting space-energy contributon fluxes, and ASPECT for plotting energy spectra. All of the codes use FIDO input formats and DISSPLA graphics software including the DISSPOP post processors.

  8. Ducted-Fan Engine Acoustic Predictions using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough such that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels show fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and propagate rotor/stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  9. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    SciTech Connect

    Talou, P.; Chadwick, M. B.; Dietrich, F. S.; Herman, M.; Kawano, T.; Konig, A.; Obložinský, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  10. Ducted-Fan Engine Acoustic Predictions Using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough such that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels show fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and propagate rotor/stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  11. TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR]

    SciTech Connect

    Bard, F E; Christensen, B Y; Gneiting, B C

    1980-04-01

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel configuration between these two extremes can be analyzed, along with additional cladding, heat sink, coolant, or capsule regions surrounding the fuel. The one-region version of the code accurately reproduces the closed-form solutions of two test problems. The code uses an implicit method, an explicit method, and a Crank-Nicolson (implicit-explicit) method.
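    The three time-stepping schemes named above are all instances of the theta method for the heat equation: theta = 0 is explicit, theta = 1 is implicit, and theta = 0.5 is Crank-Nicolson. The sketch below applies it to a 1-D slab with fixed boundary temperatures and a tridiagonal (Thomas) solve; TEMP's actual geometry is concentric cylinders, so this is an illustrative simplification.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_theta(u, r, theta=0.5):
    """One theta-method step of u_t = u_xx with r = dt/dx^2.
    theta = 0: explicit, theta = 1: implicit, theta = 0.5: Crank-Nicolson.
    Boundary temperatures are held fixed (Dirichlet)."""
    n = len(u)
    a = [-theta * r] * n
    b = [1 + 2 * theta * r] * n
    c = [-theta * r] * n
    d = [0.0] * n
    a[0] = c[-1] = 0.0      # boundary rows: identity equations
    c[0] = a[-1] = 0.0
    b[0] = b[-1] = 1.0
    d[0], d[-1] = u[0], u[-1]
    for i in range(1, n - 1):
        d[i] = u[i] + (1 - theta) * r * (u[i - 1] - 2 * u[i] + u[i + 1])
    return thomas(a, b, c, d)

u = [0.0] * 11
u[0] = 1.0                  # hot boundary, cold far end
for _ in range(200):
    u = step_theta(u, r=0.5)
# the transient relaxes toward a linear steady-state profile
```

    All three theta choices share this fixed point, which is one reason a code like TEMP can offer the schemes interchangeably and check them against closed-form solutions.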

  12. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.
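    The notion of a packing calculation can be illustrated with a deliberately crude stand-in for PEBBLES' contact-force pebble mechanics: random sequential addition of equal spheres in a unit box. All numbers below are illustrative assumptions:

    ```python
    import math
    import random

    # Toy random-sequential-addition (RSA) packing: drop sphere centers at
    # random and keep only those that do not overlap any sphere placed so far.
    random.seed(1)

    def rsa_pack(n_attempts, radius, box=1.0):
        centers = []
        min_sq = (2 * radius) ** 2          # centers may not be closer than 2r
        for _ in range(n_attempts):
            c = tuple(radius + random.random() * (box - 2 * radius)
                      for _ in range(3))
            if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= min_sq
                   for p in centers):
                centers.append(c)
        return centers

    r = 0.05
    centers = rsa_pack(5000, r)
    # Fraction of the unit box occupied by the accepted spheres.
    packing_fraction = len(centers) * (4 / 3) * math.pi * r ** 3
    ```

    A real pebble-bed model resolves gravity, friction, and pebble recirculation rather than rejecting overlaps, which is why RSA saturates well below the dense random packings seen in actual beds.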

  13. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180-degree turns, pin fins, finned passages, bypass flows, tip-cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passage. The code can independently compute the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit-plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
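    A single station of such a one-dimensional passage calculation can be sketched with the standard Blasius friction-factor and Dittus-Boelter heat-transfer correlations standing in for the code's passage-specific correlations; the flow rate and gas properties below are made-up values:

    ```python
    import math

    # One channel station: from mass flow, hydraulic diameter, and gas
    # properties, estimate Reynolds number, friction factor, and heat
    # transfer coefficient for a smooth turbulent duct.
    def channel_station(mdot, D_h, mu, k, cp):
        Re = 4 * mdot / (math.pi * D_h * mu)  # Reynolds number, round duct
        Pr = cp * mu / k                      # Prandtl number
        f = 0.316 * Re ** -0.25               # Blasius friction factor
        Nu = 0.023 * Re ** 0.8 * Pr ** 0.4    # Dittus-Boelter (heating)
        h = Nu * k / D_h                      # heat transfer coefficient
        return Re, f, h

    # Hypothetical coolant passage: 0.05 kg/s of hot air in a 4 mm channel.
    Re, f, h = channel_station(mdot=0.05, D_h=0.004,
                               mu=3e-5, k=0.05, cp=1100.0)
    ```

    A passage code marches such stations along the channel, replacing these smooth-duct correlations with turbulator, pin-fin, or rotating-passage versions as the geometry dictates.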

  14. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A MODEL BUILDING CODE FOR FALLOUT SHELTERS WAS DRAWN UP FOR INCLUSION IN FOUR NATIONAL MODEL BUILDING CODES. DISCUSSION IS GIVEN OF FALLOUT SHELTERS WITH RESPECT TO--(1) NUCLEAR RADIATION, (2) NATIONAL POLICIES, AND (3) COMMUNITY PLANNING. FALLOUT SHELTER REQUIREMENTS FOR SHIELDING, SPACE, VENTILATION, CONSTRUCTION, AND SERVICES SUCH AS ELECTRICAL…

  15. WHISTBT: a 1-1/2-D radial-transport code for bumpy tori

    SciTech Connect

    Hastings, D.E.; Houlberg, W.A.; Attenberger, S.E.; Lee, D.K.

    1983-10-01

    The computer code WHISTBT has been developed from the Oak Ridge National Laboratory WHIST code to study radial transport in bumpy tori. The code can handle both positive and negative ad hoc electric fields for devices ranging from the size of ELMO Bumpy Torus-Scale (EBT-S) to a reactor-type device, EBT-R. Fueling can be by gas puffing or pellets; heating can be by injection of rf power or neutral beams.

  16. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    SciTech Connect

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HB2006 Workshop on computer code benchmarking.
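    Goal (2), validation against an analytical result, can be illustrated with a toy benchmark: a symplectic (velocity-Verlet) tracker for simple harmonic motion checked against the exact solution x(t) = cos(ωt). Everything here is illustrative, not taken from the workshop:

    ```python
    import math

    # Kick-drift-kick (velocity-Verlet) integration of x'' = -omega^2 x,
    # starting from x = 1, v = 0, whose exact solution is x(t) = cos(omega*t).
    def track(omega, dt, steps):
        x, v = 1.0, 0.0
        for _ in range(steps):
            v -= 0.5 * dt * omega ** 2 * x  # half kick
            x += dt * v                     # drift
            v -= 0.5 * dt * omega ** 2 * x  # half kick
        return x

    omega, dt, steps = 2 * math.pi, 1e-4, 10000  # one full oscillation period
    err = abs(track(omega, dt, steps) - math.cos(omega * dt * steps))
    ```

    A validation suite for a tracking code collects many such cases with known closed-form answers and fails loudly when a tolerance is exceeded; comparison and verification then extend the same discipline to other codes and to measured data.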

  17. The Code of the Street and Romantic Relationships: A dyadic analysis

    PubMed Central

    Barr, Ashley B.; Simons, Ronald L.; Stewart, Eric A.

    2012-01-01

    Since its publication, Elijah Anderson’s (1999) code of the street thesis has found support in studies connecting disadvantage to the internalization of street-oriented values and an associated lifestyle of violent/deviant behavior. This primary emphasis on deviance in public arenas has precluded researchers from examining the implications of the code of the street for less public arenas, like intimate relationships. In an effort to understand if and how the endorsement of the street code may infiltrate such relationships, the present study examines the associations between the code of the street and relationship satisfaction and commitment among young adults involved in heterosexual romantic relationships. Using a dyadic approach, we find that street code orientation, in general, negatively predicts satisfaction and commitment, in part due to increased relationship hostility/conflict associated with the internalization of the code. Gender differences in these associations are considered and discussed at length. PMID:23504000

  18. Fallout computer codes. A bibliographic perspective. Technical report, 1 November 1992-1 September 1993

    SciTech Connect

    Rowland, R.

    1994-07-01

    This report is a summary overview of the basic features and differences among the major radioactive fallout models and computer codes that are either in current use or that form the basis for more contemporary codes and other computational tools. The DELFIC, WSEG-10, KDFOC2, SEER3, and DNAF-1 codes and the EM-1 model are addressed. The review is based only on the information that is available in the general body of literature. This report describes the fallout process, gives an overview of each code/model, summarizes how each code/model handles the basic fallout parameters (initial cloud, particle distributions, fall mechanics, total activity and activity to dose rate conversion, and transport), cites the literature references used, and provides an annotated bibliography for other fallout code literature that was not cited. Keywords: Nuclear weapons, Radiation, Radioactivity, Fallout, DELFIC, WSEG, Nuclear weapon effects, KDFOC, SEER, DNAF, EM-1.

  19. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.
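    The finite-volume explicit time-marching idea can be sketched in one dimension, with linear advection and a Lax-Friedrichs flux standing in for the 3D Euler solver; in the message-passing version each rank would update one slab of cells and exchange ghost values with its neighbors. This serial sketch and its parameters are illustrative only:

    ```python
    import numpy as np

    # One explicit finite-volume step for du/dt + a du/dx = 0 on a periodic
    # grid, using the Lax-Friedrichs numerical flux and a conservative update.
    def step(u, a, dt, dx):
        up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbors
        f_right = 0.5 * a * (u + up) - 0.5 * dx / dt * (up - u)
        f_left = np.roll(f_right, 1)             # flux at the left face
        return u - dt / dx * (f_right - f_left)

    n = 200; dx = 1.0 / n; a = 1.0
    dt = 0.4 * dx / a                            # CFL number 0.4 (stable)
    x = (np.arange(n) + 0.5) * dx
    u = np.exp(-100 * (x - 0.5) ** 2)            # Gaussian initial profile
    total0 = u.sum()
    for _ in range(100):
        u = step(u, a, dt, dx)
    ```

    Because each cell update needs only its immediate neighbors, the scheme parallelizes naturally: the per-rank work is local, and only one layer of ghost cells per boundary must be communicated each step.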

  20. Ideas for Advancing Code Sharing: A Different Kind of Hack Day

    NASA Astrophysics Data System (ADS)

    Teuben, P.; Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Wallin, J. F.

    2014-05-01

    How do we as a community encourage the reuse of software for telescope operations and data processing? How can we support making codes used in research available for others to examine? Continuing the discussion from last year's "Bring out your codes!" BoF session, participants separated into groups to brainstorm ideas to mitigate factors that inhibit code sharing and to nurture those that encourage it. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.